Christopher M. Boyer
  1. 2023-07-24

    Over the last year I’ve been developing a design system / component library to use in my side-projects. At the base is a component called Block, upon which most of the styles get applied. The interface of Block focuses on configuration of atomic properties at the React component prop level. So for example, setting the padding and margin of a button is done like this:

    <Block padding="0.25" marginRight="1" tagName="button">
      Hello World
    </Block>
    

    I wanted the implementation to have a very strict, type-safe interface, but the repetitive nature of it was starting to weigh on me.

    interface BlockProps {
    	padding: string;
    	paddingTop: string;
    	paddingBottom: string;
    	paddingLeft: string;
    	paddingRight: string;
    	margin: string;
    	marginTop: string;
    	marginBottom: string;
    	marginLeft: string;
    	marginRight: string;
    }
    

    Maintaining this was time-consuming, and updating it did not scale particularly well. Plus, the margin and padding props were basically the same except for their names. I resolved to abuse every feature TypeScript afforded me to fix this. At the center of it all is this type:

    export type Mapping<T, R extends string> = {
        [K in R]: T;
    };
    

    This is a mapped type. It lets me define a type that has the keys R, with the type T for each of those keys. (It was also brought to my attention that Mapping is basically just a backwards Record, so use that instead if you intend to walk this path.) For padding, to have each directional prop and an overall padding, I’d define something like this:

    type PaddingProps = Mapping<string, 'padding' | 'paddingTop' | 'paddingBottom' | 'paddingLeft' | 'paddingRight'>;
    

    which is equivalent to

    interface PaddingProps {
    	padding: string;
    	paddingTop: string;
    	paddingBottom: string;
    	paddingLeft: string;
    	paddingRight: string;
    }
    
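    Since Mapping is just a backwards Record, the same shape can also be written with the built-in type, arguments flipped. A quick sketch for comparison (PaddingPropsViaRecord is my name for the illustration):

```typescript
// Record<Keys, Value> is the built-in equivalent, with the arguments flipped.
type PaddingPropsViaRecord = Record<
    'padding' | 'paddingTop' | 'paddingBottom' | 'paddingLeft' | 'paddingRight',
    string
>;

// Both forms accept exactly the same object shape.
const example: PaddingPropsViaRecord = {
    padding: '0.25',
    paddingTop: '0',
    paddingBottom: '0',
    paddingLeft: '0.5',
    paddingRight: '0.5',
};
```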

    Looking at this, I wondered if I could decouple padding from Top, Bottom, Left, and Right, especially as I was adding more props that had directional variants. To do this, I created a generic type consisting of a union of the type itself and template literal types for each direction. Then I used that in the Mapping instead of an explicit union type.

    type DirectionOptions<T extends string> =
        | `${T}Top`
        | `${T}Bottom`
        | `${T}Left`
        | `${T}Right`
        | `${T}`;
    
    type PaddingProps = Mapping<string, DirectionOptions<'padding'>>;
    

    This also lets us easily define the directional props for margin.

    type MarginProps = Mapping<string, DirectionOptions<'margin'>>;
    

    Finally, I wanted to make it so that you could set the hover styles for properties. The way I wanted them to manifest was an additional field for each existing field, with the text Hover appended: paddingTop would have an associated paddingTopHover, paddingBottom a paddingBottomHover, etc. This combines the technique used for Mapping with the keyof operator. For each key in the supplied type, a new property is added using a template literal to expand its name to include Hover.

    export type Hoverable<Type> = Type & {
        [Property in keyof Type as `${string & Property}Hover`]: Type[Property];
    };
    

    Applying Hoverable to BasePaddingProps below produces a type with the expected xxxHover props:

    interface BasePaddingProps {
    	padding: string;
    	paddingTop: string;
    	paddingBottom: string;
    	paddingLeft: string;
    	paddingRight: string;
    }
    
    type PaddingProps = Hoverable<BasePaddingProps>;
    
    interface PaddingProps { // equivalent to above ^^^
    	padding: string;
    	paddingTop: string;
    	paddingBottom: string;
    	paddingLeft: string;
    	paddingRight: string;
    	paddingHover: string;
    	paddingTopHover: string;
    	paddingBottomHover: string;
    	paddingLeftHover: string;
    	paddingRightHover: string;
    }
    

    Combining it with our previous properties, we can easily generate hoverable padding and margin types:

    type PaddingProps = Hoverable<Mapping<string, DirectionOptions<'padding'>>>;
    type MarginProps = Hoverable<Mapping<string, DirectionOptions<'margin'>>>;
    

    This implementation is certainly a tradeoff. The interface it produces is really clean. It lets me move fast, and building additions onto this Block infrastructure is simple. However, from an outsider’s perspective, as I found out sharing my journey piecemeal with my friends and peers, it seems kind of whack. There’s something about

    [Property in keyof Type as `${string & Property}Hover`]: Type[Property];
    

    that doesn’t quite roll off the tongue very well.
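    To close the loop on how Block might consume these generated props, here’s a minimal sketch. The toStyle helper and the assumption that the values are rem multipliers are my illustration, not the actual Block implementation:

```typescript
// Local copies of the prop types described in the post, so the
// example is self-contained.
type Mapping<T, R extends string> = { [K in R]: T };
type DirectionOptions<T extends string> =
    | `${T}Top` | `${T}Bottom` | `${T}Left` | `${T}Right` | T;
type PaddingProps = Mapping<string, DirectionOptions<'padding'>>;
type MarginProps = Mapping<string, DirectionOptions<'margin'>>;

// Callers only set the props they care about, so Partial<> the union.
type SpacingProps = Partial<PaddingProps & MarginProps>;

// Hypothetical helper: turn each supplied prop (assumed to be a rem
// multiplier, e.g. padding="0.25") into a CSS declaration.
function toStyle(props: SpacingProps): Record<string, string> {
    const style: Record<string, string> = {};
    for (const [key, value] of Object.entries(props)) {
        if (value !== undefined) {
            style[key] = `${value}rem`;
        }
    }
    return style;
}

const style = toStyle({ padding: '0.25', marginRight: '1' });
// → { padding: '0.25rem', marginRight: '1rem' }
```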

  2. 2021-09-26

    Seventy-nine days after packing up my things, trekking across the Appalachian Mountains, and settling in Manhattan’s Upper West Side, I’m turning around and reversing the process.

    When I took the job that would ultimately bring me here, I spoke with absolute confidence that New York City wasn’t just some place I thought I wanted to be. It was the place I needed to be. However, I’ve come to find that living here makes me absolutely miserable. Just how miserable has been hard to impress upon people, so you’ll have to take my word for it when I say this is the saddest I’ve ever been.

    It’s been a lengthy journey of self-reflection as to how I could have made such a catastrophic miscalculation. Divulging every aspect would be difficult; partly because of how personal it is and partly because it’s kind of a long story. But I’m sure now that this isn’t the right place for me. Hours of advice from family, friends, and professionals made me realize that the value of new roots planted here isn’t worth the cost when compared to cultivating the ones I have deeper, elsewhere.

    That said, I feel the need to make one thing absolutely clear: don’t feel too bad for me. Yeah it sucks, but I truly have no regrets about the move to New York. I believed I needed to be here, so without coming I couldn’t have known the truth. I am incredibly grateful to have had the chance to get this life choice so wrong and learn something so valuable from it.

    Finally, what’s next? Everyone’s asked me this, and, honestly, you know about as much as I do. I have no real long-term plan at this point. I’m going home, I’m taking a break, and I’ll figure it out from there.

    Selfie on the Brooklyn Bridge with Manhattan in the background

  3. 2020-06-01

    At FordLabs, we were working on an engagement with the customer experience division at Ford. The project itself was referred to as "Owner", and the broadest description of our goal was "to improve the owner experience." That ultimately manifested in the building of https://myfordvehicle.ford.com. It is a portal for getting information about your vehicle, initially by its year-make-model, and later by its VIN. At the time of writing, the 2020 Ford F-150's page looked like this.

    Image of F-150 owner page

    FordLabs' participation in this project wound down last week, and a part of that winding down included a lot of reflecting on what went well and what didn't go well. We walked away from this project with many lessons learned, but I want to focus on one in particular: continuous deployment and how it led us to success. Between April 24th and May 29th we did sixty-three production builds. At FordLabs, we aspire to ship to production as often as possible, but never in my nearly two years at the office have I come close to doing it that many times for one product, let alone in that short of a timeframe.

    We didn't start out like that, though. Our first code commit was in March, and by early April we were in production. From there, we repeatedly ran into a problem with our deployments. Whenever we wanted to deploy to production, we would manually trigger a build in Jenkins. This would use the latest commit to create a production bundle of our website and then deploy it. This pipeline didn't run automatically at first, but it didn't need to. My pair and I could complete a story, have it reviewed for design and functionality, and then send it to prod. If changes needed to be made, we could make them, review, and deploy.

    Single Pair w/Fix Commit History

    Unfortunately, this workflow makes the assumption that features can be reviewed as soon as they are finished. Rarely was this the case, so there would frequently be multiple stories prepared for review before they could be looked at.

    This was further complicated when stories were rejected as a part of the review process. Imagine a situation where Feature B was rejected, but Feature A and Feature C were both accepted.

    Single pair multiple stories with status

    Our Jenkins pipeline was configured to build and deploy the most recent commit. That meant it would try to deploy Feature A, Feature B, and Feature C in this case. Since that meant shipping a rejected feature, we were forced to fix Feature B before we could ship the two working features. There are plenty of things we could have done to solve this problem, but we chose to hold deployments until all stories were ready to go. Now, suppose we had Feature A, Feature B, Feature C, and Feature D in our backlog. Our product manager would arbitrarily pick a cutoff point, let's say Feature C. We would do all the work for Features A, B, and C, and then we would stop. Once all the stories were approved, our PM gave us the go-ahead to deploy. Then work could begin again.

    It wasn't the best solution, but the problem outlined above didn't happen to us often enough at the start for that to matter. Our designs were low-fidelity, and the functionality was simple. There was also only one pair of engineers actively committing code at the time.

    All at once, however, our designs became higher fidelity, our functionality less simple, and a second pair of engineers joined the endeavor. In the same timeframe we got more stories done, but had more rejections, which increased the size of our batch and the time between deployments.

    With those changes I made a proposal: we would do the work for a feature on a branch. Every branch would get its own deployment for the designers and PM to use in their review. Once the story was accepted, we merged it back into master. Every push to master would then trigger a production build.

    In the past, teammates of mine had been skeptical of using branches, so on those teams we often used the workflow I originally outlined for the Owner team. They had good reasons, too. Branches often lived too long, which was a problem. When a branch lives for a long time, it is likely to include changes to many files across the codebase. Additionally, if the team is using a branching strategy, there are likely multiple streams of work going on at the same time, probably touching the same code. When it comes time to merge them into master, there are going to be merge conflicts. A merge conflict is especially dangerous when you don't have the context around the code you're merging in, making it an easy place to accidentally remove a feature or introduce a bug.

    In my experience, a long-lived branch tends to grow out of a story with a large scope. The Owner team used "T-shirt sizes" for our stories, meaning a story could be small, medium, large, or extra-large. Branches that lived for a long time usually came from large and extra-large stories, so the simple solution was to break them down into small and medium sized stories.

    At first, the website allowed a user to enter their vehicle's year and model, and then they were sent to a page that displayed the relevant information. The next feature we wanted was to allow them to enter their VIN to get more specific information about their vehicle. Specifically, we wanted to display the recalls and field service actions (FSAs) for their vehicle. This started out as one user story written like so:

    As the owner of a vehicle
    I would like to be able to enter my VIN
    so that I can get specific information about my vehicle
    

    This story seems pretty straightforward. The flow could be imagined like this:

    1. The user enters their VIN on the homepage.
    2. They are redirected to /vehicle/vin/1234567890ABCDEFGH.
    3. Their data is shown to them.

    When we looked at how this flow could work from an engineering perspective, it got more complicated:

    1. We would have to add an input for VIN on the homepage form, which includes an entire redesign of the form.
    2. We would have to create a component that gets rendered when the user hits the appropriate URL.
    3. Upon landing on the VIN page, we would have to make a request to get the year-make-model of the given VIN.
    4. If that request failed, we would have to add an error state for an invalid VIN.
    5. We would use the year-make-model to get generic information for that vehicle.
    6. If this request failed (which usually meant the vehicle isn't supported in the user's region), we would have to create an error state to handle it.
    7. At the time, we could get VIN-specific recall and field service action (FSA) information, so we would have to create a service to make the request and add the tile to the page.

    The story as written would have been extra-large. "What stories are hiding in here?", we asked ourselves. We eventually broke it down like this:

    1. Navigating to /vehicle/vin/1234567890ABCDEFGH shows the vehicle's VIN and all the default information for the corresponding year-make-model. If this request fails, just navigate to the 404 page for now.
    2. Update homepage to include form for entering VIN.
    3. Display the recall information.
    4. When the VIN decode fails, navigate to a page saying that an invalid VIN was given.
    5. When the VIN decode succeeds, but the vehicle isn't supported in the current region, navigate to an error page explaining that information for that vehicle isn't available.

    One extra-large story became one medium story and four small stories.

    Not only do smaller stories make it so developers don't conflict as often, they also allow us to parallelize our work. Of those five stories, the first one had to be done first. From there, the stories really could have been done in any order, and at the same time, without causing conflicts with the other developers.

    As we wrapped up our final retrospectives and reviews with our management, product owners, and stakeholders, we received praise for a number of things, but by far the loudest praise was for our ability to iterate and ship to production so quickly and so often. I said earlier that the solution was simple: "just write smaller stories." One person alone can't make this happen. The team wanted to put stories in production as soon as they were ready. For that to work, we needed a branch for each feature. For that to work, we needed branches to be merged back into master frequently. For that to work, we needed small stories.

    The technique succeeded for us where it had failed for others because the team put in place a system that supported its success.

  4. 2020-02-27

    I maintain this tool guet, and I've been doing a lot of refactoring as of late. Refactoring is a practice in software development where engineers go back and change the implementation of code to increase clarity while maintaining the original functionality. In an effort to explain why refactoring is valuable not just for the engineers, but for the product team as a whole, I wanted to walk through an example of a recent refactor and its outcomes.

    The Problem

    When updating code we manage changes using a tool called git. Saving the current state of a codebase using git is called a commit, and one normally looks like this.

    commit bff1ad6cb29db56c624c82f8968058e82b292b99
    Author: chiptopher <chrisboyerdev@gmail.com>
    Date:   Thu Feb 27 07:28:49 2020 -0500
    
        Initial commit
    

    The author (line 2 of the commit) gets credit for the commit. However, on sites like GitHub you can include the following lines to give more people credit:

        Co-authored-by: First Committer <first@test.com>
        Co-authored-by: Second Committer <second@test.com>
    

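    Mechanically, adding that credit just means appending trailer lines to the commit message. A small sketch of the idea (withCoAuthors is a hypothetical helper, not guet's actual code):

```typescript
// Hypothetical helper: append Co-authored-by trailers to a commit message.
interface CoAuthor {
    name: string;
    email: string;
}

function withCoAuthors(message: string, coAuthors: CoAuthor[]): string {
    const trailers = coAuthors
        .map(({ name, email }) => `Co-authored-by: ${name} <${email}>`)
        .join('\n');
    // Trailers must be separated from the message body by a blank line.
    return `${message}\n\n${trailers}`;
}

const message = withCoAuthors('Initial commit', [
    { name: 'First Committer', email: 'first@test.com' },
    { name: 'Second Committer', email: 'second@test.com' },
]);
// message now ends with the two Co-authored-by trailer lines.
```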
    There's been a bug in guet since almost its inception back in 2018. What guet is supposed to do is make pair programming contribution tracking easier. You can add people, set them as the committers, and then they should be tracked in the commit. guet should also rotate who the git author is between the set committers. The workflow would look something like this:

    guet add f "First Committer" first@test.com
    guet add s "Second Committer" second@test.com
    guet set f s
    git commit -m "Initial commit"
    git commit -m "Second commit"
    git log
    

    What a guet user would expect is that on the first commit, "First Committer" would be the author, and on the second commit the author would be "Second Committer." However, looking at the actual commit logs, one can see that the author of the first commit is me ("chiptopher"). This is because I have myself globally set as the git author on my computer.

    commit 2dbc76b72dac7cbfc672328960ba26f638e9b083 (HEAD -> master)
    Author: Second Committer <second@test.com>
    Date:   Thu Feb 27 07:28:55 2020 -0500
    
        B
    
        Co-authored-by: Second Committer <second@test.com>
        Co-authored-by: First Committer <first@test.com>
    
    commit bff1ad6cb29db56c624c82f8968058e82b292b99
    Author: chiptopher <chrisboyerdev@gmail.com>
    Date:   Thu Feb 27 07:28:49 2020 -0500
    
        Initial commit
    
        Co-authored-by: First Committer <first@test.com>
        Co-authored-by: Second Committer <second@test.com>
    

    This error was originally logged in November of 2018, but it wasn't until recently that I found the reason while poking around in the git module of the code. There was a function that interacted with git, configure_git_author, that would set the name and email of the current git author. The code that ran when you used the guet set command never called configure_git_author. However, the code that ran after a commit did call it.

    Now, the simplest solution to fixing this bug would certainly have been to call the configure_git_author method in the guet set command code, but there was a refactor that I wanted to try which I hoped would alleviate this kind of problem in the future. Enter the observer pattern.

    Observer Pattern

    I'm not going to get too into the weeds of what the observer pattern is. The formal definition given for it by the Gang of Four is:

    Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.

    And this is roughly the diagram for how I implemented it in guet.

    Essentially, when you have certain kinds of state changes in your system that can cause a ripple effect, the observer pattern can help define how to listen for those state changes.

    Implementation

    In my case, the state change was the currently set committers. Every time the currently set committers change, there are three updates I have to coordinate:

    1. Update the guet configuration for who the current committers are
    2. Update the guet configuration for who the current author is
    3. Update the git configuration for who the current author is

    Each update has a function associated with it: configure_git_author, set_committers, and set_committer_as_author. In order to make sure each of those things happens every time the current committers change, I have to remember to call each function.

    With the observer pattern, I implement a contract that says when the committers are changed, all of the observers will be notified. The observed object in my implementation was a Context with a set_committers method. The Git module becomes an observer of the Context. Now, whenever Context is used to update the committers, every module observing it will handle the new committers.
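    As a rough sketch of that contract (written in TypeScript for illustration — guet itself is Python, and these names are simplified from the real ones):

```typescript
// Illustrative observer-pattern sketch, not guet's actual implementation.
interface CommitterObserver {
    notifyOfCommitterSet(committers: string[]): void;
}

// The observed object: changing the committers is the one state change
// everything else hangs off of.
class Context {
    private observers: CommitterObserver[] = [];

    addObserver(observer: CommitterObserver): void {
        this.observers.push(observer);
    }

    setCommitters(committers: string[]): void {
        // Every registered observer reacts; nothing has to be
        // remembered at the call site.
        for (const observer of this.observers) {
            observer.notifyOfCommitterSet(committers);
        }
    }
}

// The git module reacts by updating the configured author.
class GitModule implements CommitterObserver {
    author: string | null = null;

    notifyOfCommitterSet(committers: string[]): void {
        this.author = committers[0] ?? null; // first committer becomes author
    }
}

const context = new Context();
const git = new GitModule();
context.addObserver(git);
context.setCommitters(['First Committer', 'Second Committer']);
// git.author is now 'First Committer'
```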

    How This Helps

    Were I to put the configure_git_author call in the guet set command code, that would have solved the problem. Including testing, it might have taken twenty minutes. My observer solution took around two hours to complete. This might seem like an expensive price to pay for not gaining much, but it results in code that's easier to work with.

    Since a process exists that codifies this relationship, it will be easier to augment this behavior with future features, and the number of places where the consequences of a change in committers exist is reduced to one. Every place we have duplicated logic is another place we have to make a change for one feature. To get the git author to update when committers change, we would have to do it once in guet set and once in the post-commit processor. And keep in mind that we have to write a test for each. That solution wouldn't scale in the long run.

    The new implementation is cleaner, easier to test, and easier to work with. Refactoring is a necessary part of codebase maintenance, and goes a long way towards speeding up later development.

  5. 2020-02-25

    As a part of our team's Lunch & Learn series, I gave a presentation on code quality, why it matters, and why it's a cross-functional concern.

    Given in person in Ann Arbor.

  6. 2018-09-13

    Shippensburg University has no rivers, no lakes, and no watercraft as far as I could tell. It’s a landlocked university bordering farmland on all sides, with less than seven thousand students. This place, that we affectionately call Ship, is where I came from to be a software engineer for FordLabs.

    I graduated from Ship with a bachelor’s degree in software engineering, which was one of the three engineering degrees offered at Ship at the time. In fact, of all of the Pennsylvania State System of Higher Education schools, Ship is the only one to offer any engineering degrees. In 2012, software engineering was launched as its own degree instead of an offshoot of computer science, and in 2015 it became ABET accredited. That makes us one of only 28 ABET accredited software engineering programs in the country. Our program focuses on concepts that one would need to be a good engineer—things like code quality, design patterns, and agile methodologies—rather than the typical data structures and algorithms one usually covers in a computer science degree. There is a practice over theory focus that I feel really helps me shine in my work.

    I could talk about my time in the Computer Science and Engineering department at Ship for hours. We worked with a lot of cool partners, worked on many interesting projects, and had fun along the way. I have a lot of pride for my alma mater, and because we have a fledgling engineering program, I think it’s important to give back any way I can. So when the alumni team reached out about doing a piece on me to highlight the program before the announcement of their School of Engineering, I was happy to oblige.

    Click here to see the video!

    Picture of Chris with friends at college graduation