Highlights

A feed of passages I've highlighted from articles across the web.

Subscribe via RSS

“Research shows that the more we write about what we are doing and thinking, the clearer our ideas become. Repeated writing on a theme allows for the development of abstract ideas and complex relationships. Furthermore, when we return to earlier entries in our journals, we may discover we are able to answer a question, or we may suddenly understand the importance of a certain thought to the development of our work.” (RMIT Study and Learning Centre, 2012)

The scholars at UTS suggest structuring your reflective writing in three stages: Description: ‘What?’ Analysis: ‘So what?’ Synthesis: ‘Now what?’

the general rule holds—the background is generally brighter than the objects in a scene, and so the human brain becomes much more used to dark objects against light backgrounds and thus prefers them.

In the scientific literature, black on white is called “positive polarity,” whereas white on black is called “negative polarity.” Numerous studies over decades of research have found that positive polarity displays provide improved performance in a variety of areas.

a positive polarity advantage has been found in error rates and reading speed in a letter identification task (Bauer and Cavonius 1980), the number of transcribed letters onto paper (Radl 1980), subjective ratings on visual comfort (Saito, Taptagaporn, and Salvendy 1993; Taptagaporn and Saito 1990, 1993), text comprehension (A. H. Wang, Fang, and Chen 2003), reading speed (Chan and Lee 2005) and proofreading performance (Buchner and Baumgartner 2007).

In another paper in Ergonomics, Buchner and Baumgartner showed that the benefits of positive polarity displays were independent of ambient light when they compared results of the same experiment run in a darkened room versus one with typical office lighting. (Nor did chromaticity—blue and yellow as opposed to black and white—make a difference.)

if a belief is useful to you in achieving your goals, keep it. Otherwise, discard it. An extension of this is, per Graham, if you have to pick between two theories and one is less useful than the other, pick the more useful one regardless of the truth. I believe effective, successful people do some variant of this. Unsuccessful people don’t.

> *design is different. As a designer, I don't matter. My work doesn't matter. Nothing I make matters in the context of my process. It's all about the people you are building for. You're just trying to solve problems for people. Once you realize this, it's the most liberating thing.* Why is this mindset tweak useful? Well, if you’re able to see the goal of your work as ‘solving problems’, then criticism of your work becomes less grating, because it’s no longer about you.

But implicit in optimising for usefulness is the idea that you should *continually reevaluate* your heuristics

Optimising for usefulness consists of four aspects: first, that mindset hacks exist, and that the good ones help you become more effective at life. Second, that we may invert this idea: if you possess a mindset that hinders you from achieving your goals, aggressively look for a better one. Third, ‘optimising for usefulness’ also works when it comes to learning from experience: when something happens to you, prioritise learning the lessons with higher usefulness first. Verify that they work through trial and error. And fourth, prevent yourself from leaping to obvious (or convenient!) but ultimately less useful conclusions.

What I’d like to propose is that self-awareness is the most important design skill, not because it’s a fundamental skill like many of those listed above, but because it unlocks the traits that lead to great teams and great design.

What that means is that if I think I’m self-aware, I need someone — my boss, a colleague, a friend, my husband, *someone* — to corroborate it. In the study that Dr. Eurich and her team ran, the data showed that while 95% of folks surveyed thought they were self-aware, only 10% to 15% actually were. That’s pretty shocking, especially if we’re wondering if we’d be one of those ten.

The reason we’re assessing ourselves incorrectly is that we’re relying on introspection to evaluate ourselves, and the way we introspect is faulty.

The problem with introspection is that we’re asking “why” to understand ourselves and the world around us, and in doing so it’s incredibly easy to invent answers that feel true but are often very wrong. Things are hidden from our view, or often from our consciousness. We can’t self-evaluate accurately without adding our own bias, our own worldview, and our own “truth.” And these things aren’t facts. **People who are self-aware ask what instead of why**

Let’s put this in a really practical example. Let’s say I got in a fight with my husband — and this is sheerly hypothetical, because obviously we never fight — and I’m reflecting on what went wrong. I can choose one of two routes. One route is I can ask why. *Why* did I respond in that way? *Why* is he being like this? *Why* do we keep getting in the same argument? The other route involves asking what. *What* happened that caused the argument? *What* did I do to escalate it? *What* could either of us do differently next time?

Now you can see the difference between a *why* and a *what* for introspection. *Whats* move us forward. *Whys* move us backward.

Except that it turned out most of these interventions didn’t work by decreasing cognitive biases or increasing detection of subtle signals; they worked by tamping down on noise.

![](https://commoncog.com/blog/content/images/2021/02/Screenshot-2021-01-27-at-1.45.34-AM.png)

> When people consider errors in judgment and decision making, they most likely think of social biases like the stereotyping of minorities or of cognitive biases such as overconfidence and unfounded optimism. The useless variability that we call noise is a different type of error. To appreciate the distinction, think of your bathroom scale. We would say that the scale is biased if its readings are generally either too high or too low. If your weight appears to depend on where you happen to place your feet, the scale is noisy. A scale that consistently underestimates true weight by exactly four pounds is seriously biased but free of noise. A scale that gives two different readings when you step on it twice is noisy. Many errors of measurement arise from a combination of bias and noise. Most inexpensive bathroom scales are somewhat biased and quite noisy.
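
A minimal sketch of the distinction, with the scale's behavior invented for illustration: a biased instrument is wrong by a constant, while a noisy one scatters around the truth.

```python
import random

TRUE_WEIGHT = 150.0  # pounds (made up for the example)

def biased_scale():
    # Always reads exactly 4 pounds low: systematic error, no noise.
    return TRUE_WEIGHT - 4.0

def noisy_scale():
    # Correct on average, but every reading scatters: noise, no bias.
    return TRUE_WEIGHT + random.gauss(0.0, 2.0)

print([biased_scale() for _ in range(3)])           # [146.0, 146.0, 146.0]
print([round(noisy_scale(), 1) for _ in range(3)])  # e.g. [151.2, 148.7, 149.5]
```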

One obvious takeaway is that you’re probably going to get better decision making results if you target noise than you would if you targeted cognitive biases. One of the things that we’ve talked around, but not directly discussed, is this idea that fighting cognitive biases is just plain *hard*, and that you should be sceptical of most interventions designed to combat it.

Morse offers another source of reminder: the designs we create for the web -- and for most software, too -- are not likely to last forever. So...

> *Don't fall in love with borders, gradients, a shade of blue, text on blurred photos, fancy animations, a certain typeface, flash, or music that autoplays. Just get attached to solving problems for people.*

That last sentence is pretty good advice for programmers and designers alike. If we detach ourselves from our specific work output a bit and instead attach ourselves to solving problems for other people, we'll be able to handle their critiques more calmly. As a result, we are also likely to do better work.

In fact I suspect if you had the sixteen year old Shakespeare or Einstein in school with you, they'd seem impressive, but not totally unlike your other friends. Which is an uncomfortable thought. If they were just like us, then they had to work very hard to do what they did. And that's one reason we like to believe in genius. It gives us an excuse for being lazy. If these guys were able to do what they did only because of some magic Shakespeareness or Einsteinness, then it's not our fault if we can't do something as good.

I think the solution is to work in the other direction. Instead of working back from a goal, work forward from promising situations. This is what most successful people actually do anyway.

It's not so important what you work on, so long as you're not wasting your time. Work on things that interest you and increase your options, and worry later about which you'll take.

If you want to do good work, what you need is a great curiosity about a promising question.

That's what you need to do: find a question that makes the world interesting. People who do great things look at the same world everyone else does, but notice some odd detail that's compellingly mysterious.

The way to get a big idea to appear in your head is not to hunt for big ideas, but to put in a lot of time on work that interests you, and in the process keep your mind open enough that a big idea can take roost. Einstein, Ford, and Beckenbauer all used this recipe. They all knew their work like a piano player knows the keys. So when something seemed amiss to them, they had the confidence to notice it.

Much later, I discovered an old nut by investors Warren Buffett and Charlie Munger, which they named the Noah Rule (after the [biblical story](https://en.wikipedia.org/wiki/Noah)): ‘predicting rain doesn’t count, building an ark does’.

It doesn’t matter if Singapore is geopolitically challenged, or that it faces new challenges in the 2020s or that VC money is skipping SG entirely and flowing into Indonesia. What matters for these students are questions at a lower level of abstraction: “What are my career goals, and which companies may I work at to advance towards those goals?” along with “What are the market/regulatory/economic shifts that are *affecting those companies directly*?” These questions operate at the right level of abstraction. Commentary about macro-economic trends does not.

Similarly, this is why the literature around habit-formation should probably be [tossed out](https://commoncog.com/blog/the-power-of-habit/) when talking about organisational culture or societal behaviour. Just because a set of ideas works for one level of a system doesn’t automatically make it relevant to the level of organisation above it. To make this more concrete: policymakers who think that the habit loop is sufficient for initiating behaviour change at the level of a group would likely be burned by [Goodhart’s Law](https://en.wikipedia.org/wiki/Goodhart%27s_law), or be taken by surprise by all the second and third order effects that ripple throughout the niches in a complex adaptive system. They need a different set of ideas to effect behavioural change at higher levels of organisation — ideas like incentive system design, not individual habit formation.

The first implication we have already covered in this essay: be suspicious of ideas that are taken from one level of a complex adaptive system and are then applied to a higher (or lower) level. It is likely that they aren’t as useful as you might imagine them to be.

The second idea is a heuristic that I attribute to my friend, [Jin](https://twitter.com/jin_): “when reasoning about your career, think one level above, or one level below.” Or, as he puts it more succinctly: “think ‘plus or minus one!’”

The reason I’ve spent an entire essay arguing that we should ‘seek ideas at the right level of abstraction’ is because I think that the opposite habit — ‘use high-level analyses as a justification for our actions’ — is a particularly pernicious trap for smart, analytical people. We do this because it’s a narrative stereotype: we think that geniuses must extrapolate from high-level analyses to individual action, and therefore we should do the same.

In simpler terms, ‘strong opinions, weakly held’ sometimes becomes a license to hold on to a bad opinion strongly, with downside protection, against the spirit and intent of Saffo’s original framework.

if an idea has certain affordances, and people seem to always grab onto those affordances and abuse the idea in the exact same ways, then *perhaps you shouldn’t use the idea in the first place*

Both forms of failure stem from the same tension. It’s easy to have strong opinions and hold on to them strongly. It’s easy to have weak opinions and hold on to them weakly. But it is quite difficult for the human mind to vacillate from one strong opinion to another.

• When forming an opinion, phrase it in a way that is very clear, and that may be verified by a particular date.
• Then state, as a probability, how confident you are that it is correct.

The next time you find yourself making a judgment, don’t invoke ‘strong opinions, weakly held’. Instead, ask: “how much are you willing to bet on that?” Doing so will jolt people into the types of thinking you want to encourage.

As technology advances, software will increasingly be chosen not just for how well it addresses its use case, but how it conveys its personality, similar to how we choose our clothes. We’re already beginning to see this shift. In highly individualized spheres like note-taking tools and consumer crypto, software is often chosen based on identity.

By designing everything on the screen to be as photorealistic as possible, the software essentially negated needing to define its style or stance — it merely reflected the world outside of it.

By appealing to *everyone* with these clean lines, gradients, and rounded shapes, no one feels particularly compelled by them. And while these brands serve their purpose in attracting the “right” demographic to their software, they also make it difficult to form a meaningful connection with the user — the product feels like a stylistic blank slate.

Style is not just an eye-catching color or a fun skeuomorphic icon. Style is the indescribable quality that sums up how interacting with something makes a person *feel*. It isn’t simply sprinkling bits of color or animations on top of enterprise software to move metrics. It’s the feeling the user gets when everything about a product feels specific to a certain personality that they can identify and relate to.

Style matters to our economy, our society, and to each of us personally — it allows us to communicate who we are and who we want to be. While it is widely understood that style is the driving force behind [consumer purchasing decisions](https://www.scitechnol.com/peer-review/consumer-purchase-decision-making-process-based-on-the-traditional-clothing-shopping-form-mZst.php?article_id=6373) in things like clothing, I’d argue that the same has become true in technology and software as well.

“make sure you’re playing the real game, not some more complicated game you’ve made up for yourself.”

Musicians always learn by playing existing songs. For some reason, in visual design, we are expected to just produce tons of original content out of nowhere. It’s not a productive attitude: to learn the most, you should study work that really resonates with you and absorb it by trying to reproduce it. No shame in that!

This is a specific instance of a larger problem: most people are unable to finish their side projects or focus on their side hustles, because they get distracted and sidetracked by tinkering and other things that increase the complexity of the project, instead of working toward the original goal.

The internet doesn’t require you to introduce yourself, to write a conclusion that begins “In Conclusion…”, or to conspicuously avoid the passive voice. What matters is whether you’ve communicated your message and your style.

Here's a simple trick for getting more people to read what you write: write in spoken language.

It seems to be hard for most people to write in spoken language. So perhaps the best solution is to write your first draft the way you usually would, then afterward look at each sentence and ask "Is this the way I'd say this if I were talking to a friend?"

Extended cognition takes the idea that your mind is ‘on’ your smartphone literally. It says that human cognitive states and processes sometimes spill outside our heads and into objects in our environment. Alleged examples include not just smartphones, but also use of simpler technology (pencil and paper to perform a calculation), our own body (ticking off our fingers when we count), and other people (our spouse who remembers appointments so we don’t have to).

Even during relatively undemanding tasks – e.g. copying a simple coloured pattern made of puzzle pieces – we off-load information processing onto the environment to reduce work for our brains (Ballard et al. 1997). Once one recognises this, one sees it everywhere: a bartender lines up cocktail glasses of different shapes to remember a complex order; a mathematician uses pencil and paper to guide their steps in a calculation; a child uses their fingers to count off the days until their next birthday. These observations reveal that intelligent, adaptive human thought and behaviour need not always be produced by the brain alone. It often involves a two-way interaction between the brain, body, and world (Dennett 1996; Hutchins 1995; Simon 1969).

The hypothesis of extended cognition (HEC) goes beyond this relatively uncontroversial observation in a controversial way. Environmental processes don’t merely interact with our brain to produce thought and behaviour. Those environmental processes have as much claim to be mental or cognitive as their brain-based collaborators.

individuals often lack global knowledge of the behaviors of others and must estimate them from the observations of their friends' behaviors. In some cases, the structure of the underlying social network can dramatically skew an individual's local observations, making a behavior appear far more common locally than it is globally.

As a result of this paradox, a behavior that is globally rare may be systematically overrepresented in the local neighborhoods of many people, i.e., among their friends.
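
A toy simulation of that skew (the graph and the numbers are invented): when the few people who exhibit a behavior are also the best-connected, most nodes see a local majority even though the behavior is globally rare.

```python
# Invented toy network: two well-connected "hubs" exhibit a behavior
# that only 25% of the network actually has.
neighbors = {
    "hub1": ["a", "b", "c", "d"],
    "hub2": ["a", "b", "e", "f"],
    "a": ["hub1", "hub2"], "b": ["hub1", "hub2"],
    "c": ["hub1"], "d": ["hub1"],
    "e": ["hub2"], "f": ["hub2"],
}
active = {"hub1", "hub2"}

global_rate = len(active) / len(neighbors)  # 0.25
for node, friends in neighbors.items():
    local_rate = sum(f in active for f in friends) / len(friends)
    print(f"{node}: {local_rate:.0%} of friends active ({global_rate:.0%} globally)")
# Six of the eight nodes see 100% of their friends doing something
# that only 25% of the network does.
```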

it's the totality of those “nodal points” that indicates one’s own unique perspective. It doesn’t matter if you specifically sought out the nodal point or not, it’s the recognition that counts. When you encounter a piece of life-changing information (no matter how large the change part is), you are simultaneously discovering and creating “yourself,” becoming incrementally more complete. Your perspective (where your gaze is directed) is made up of a meandering line through these points. Learning (or maybe some precursor to learning) is a lot about developing the intuition to recognize when something you find in the world is going to be a nodal point for you.

For mature thought there is no mechanical substitute. But creative thought and essentially repetitive thought are very different things. For the latter there are, and may be, powerful mechanical aids.

Qualitative usability testing aims to identify issues in an interface, while quantitative usability testing is meant to provide metrics that capture the behavior of your whole user population.

Quantitative usability studies are usually summative in nature: their goal is to measure the usability of a system

In contrast, qualitative user studies are mostly formative: their goal is to figure out what doesn’t work in a design, fix it, and then move on with a new, better version.

In comes Jakob Nielsen’s article that recommends qualitative testing with 5 users. There are three main assumptions behind that recommendation:
• That you are trying to identify issues in a design. By definition, an issue is some usability problem that the user experiences while using the design.
• That any issue that somebody encounters is a valid one worth fixing. To make an analogy for this assumption: if one person falls into a pothole, you know you need to fix it. You don’t need 100 people to fall into it to decide it needs fixing.
• That the probability of someone encountering an issue is 31%.
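
The arithmetic behind that recommendation is worth spelling out; a quick sketch using the 31% figure:

```python
# Chance that at least one of n users encounters a given issue,
# when each user independently hits it with probability p = 0.31.
p = 0.31
for n in (1, 2, 3, 5, 10):
    print(n, round(1 - (1 - p) ** n, 2))
# With 5 users: 1 - 0.69**5 ≈ 0.84, i.e. five users surface roughly 85%
# of the issues, which is where the "test with 5 users" rule comes from.
```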

In qualitative studies, you’re simply counting usability issues. And, while there is a statistical uncertainty about any number obtained from a quantitative study (how will the average obtained from my study compare with the average of the general population), there is absolutely no uncertainty in a qualitative study — any error identified is a legit problem that needs to be fixed.

The garden helps us move away from time-bound streams and into contextual knowledge spaces.

It's low friction. Gardening your thoughts becomes a daily ritual that only takes a small amount of effort. Over time, big things grow.

You're freed from the pressure to get everything right immediately. You can test ideas, get feedback, and revise your opinions like a good internet citizen.

This freedom of course comes with great responsibility. Publishing imperfect and early ideas requires that we make the status of our notes clear to readers. You should include some indicator of how "done" they are, and how much effort you've invested in them.

Typical software systems are over-formal. They insist on finality, though their users usually think in terms of relative preferences, contingent on context.

If I were running a startup, this would be the phrase I'd tape to the mirror. "Make something people want" is the destination, but "Be relentlessly resourceful" is how you get there.

#1: Visibility of system status The design should always keep users informed about what is going on, through appropriate feedback within a reasonable amount of time.

#2: Match between system and the real world The design should speak the users' language. Use words, phrases, and concepts familiar to the user, rather than internal jargon.

#3: User control and freedom Users often perform actions by mistake. They need a clearly marked "emergency exit"

Good error messages are important, but the best designs carefully prevent problems from occurring in the first place. Either eliminate error-prone conditions, or check for them and present users with a confirmation option before they commit to the action.

#6: Recognition rather than recall Minimize the user's memory load by making elements, actions, and options visible. The user should not have to remember information from one part of the interface to another.

Every extra unit of information in an interface competes with the relevant units of information and diminishes their relative visibility.

Well, if you write the way I do, which means that you start something and you rewrite it, especially the beginning part, you rewrite it time and time and time again, at some point you feel, not that it’s beyond repair, but that there’s nothing more you can do to fix it. In other words, as far as you’re concerned, that’s what you were going to write.

All error messages must be:
• written in plain language that is easy to understand for non-technical users and that does not imply that the mistake is the user's fault
• precise in specifying exactly what was done wrong (that is, not be generic or vague)
• constructive in suggesting steps the user can take to correct the problem

Hierarchical structures are usually forced and artificial. Intertwingularity is not generally acknowledged — people think they can make things hierarchical, categorizable and sequential when they can't.

💎 Free associations are closer to how the brain works. Thoughts are not neatly packed in file cabinets: rather, one leads to another and yet another.

We must remember quantitative data is only relevant to the actions currently available and therefore has the potential to limit our thinking.

people set higher standards of evidence for hypotheses that go against their current expectations. We tend to search for, interpret, favor, and recall information in a way that confirms or supports our prior beliefs or values.

“pay attention to what users do, not what they say”. User actions and feedback are seldom aligned, and this can introduce inaccurate data into the design process if we rely too much on user feedback alone.

False consensus effect A pervasive cognitive bias that causes people to see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances.

Overly specific feedback can frame the problem in a way that limits the design solution, or influences it in a way that leads to overlooking other important considerations.

Meadows defines a system as “an interconnected set of elements that is coherently organized in a way that achieves something.”

Meadows defines interconnections as the “relationships that hold the elements together.”

I’d suggest that the interconnections in a design system include design constraints, the feedback loop between implementers of the design system and the system’s consumers, the cadence at which implementers update documentation, the amount of money the company invests in the design system, and the way that the component library reads design tokens.

Meadows points out that “purposes are deduced from behavior, not from rhetoric or stated goals,” meaning that the only real way to understand a system’s purpose is by watching its behavior. As implementers of design systems, we often try to dictate the system’s purpose. We might state goals like: “the design system will improve design consistency.” Unfortunately, saying it doesn’t make it true. We need to make sure that we understand the different elements and interconnections in our system. Modifying the way elements are interconnected can allow us to adjust the purpose of the system — but we need to make sure that we understand it first.

the way HSL calculates lightness is flawed. What most color spaces don’t take into account is that different hues are inherently perceived as different levels of lightness by the human eye—at the same level of mathematical lightness, yellow appears lighter than blue.

Perceptually uniform color spaces model colors based on factors that relate more to human vision, and perform sophisticated color transformations to ensure that these dimensions reflect how human vision works.
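
To see the flaw concretely, here is a minimal sketch using only Python's standard library: yellow and blue at the *same* HSL lightness have wildly different relative luminance (the WCAG luminance formula stands in as a rough perceptual measure).

```python
import colorsys

def relative_luminance(r, g, b):
    # WCAG relative luminance: linearize each sRGB channel, then weight
    # by how strongly the eye responds to it.
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

# Yellow (hue 60°) and blue (hue 240°), both at HSL lightness 0.5, saturation 1.0.
# Note: colorsys orders the arguments as (h, l, s).
yellow = colorsys.hls_to_rgb(60 / 360, 0.5, 1.0)
blue = colorsys.hls_to_rgb(240 / 360, 0.5, 1.0)

print(round(relative_luminance(*yellow), 2))  # 0.93
print(round(relative_luminance(*blue), 2))    # 0.07
```

Same “mathematical lightness,” but yellow is more than ten times as luminous to the eye as blue.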

Tiny Wins are often shortcuts. They save a user’s time by getting rid of existing steps — physical or mental — required to perform an action. This is a really useful way to think about the types of changes we saw above, and a good way to differentiate them from other low-hanging fruits that don’t belong on your lists.

Tiny Wins are high impact. They affect things that the majority of users interact with on a regular basis. If the change won’t have the compounding effects we’ve discussed, it doesn’t belong on the list.

Tiny Wins are low effort. These projects are straightforward, scoped, and take a short amount of time. If the change requires a significant amount of time and effort, it doesn’t belong on the list.

Tiny Wins are standalone. These changes are small, scoped, and provide their own value. If the change can’t be appreciated on its own, it doesn’t belong on the list.

I noticed something weird about the issues we solved with these changes. They were almost never reported. Hundreds of people were ecstatic when we added that arrow to PR pages. Out of those, not a single one indicated that this flow was confusing. A lot of people assumed it was their own fault for not just “getting” it. Others get so accustomed to these flows that they don’t even notice their anxiety. If they do, it’s just part of life. The status quo. Something to live with, not improve.

MVPs and iteration are powerful tools that should be leveraged by companies looking to move quickly. But Tiny Wins are much more potent when it comes to filling the gaps, improving retention, and nurturing your community of users.

Because of this, these changes were perceived and acknowledged as fresh, complete features. They communicated to users that they were being listened to. These features bred excitement, goodwill, and likely loyalty towards their respective companies. Hell, they probably even contributed to some organic growth.

“You’re efficient when you do something with minimum waste. And you’re effective when you’re doing the right something.”

DeMarco defines slack as “the degree of freedom required to effect change. Slack is the natural enemy of efficiency and efficiency is the natural enemy of slack.” Elsewhere, he writes: “Slack represents operational capacity sacrificed in the interests of long-term health.”

Having a little bit of wiggle room allows us to respond to changing circumstances, to experiment, and to do things that might not work.

The irony is that we achieve far more in the long run when we have slack. We are more productive when we don’t try to be productive all the time.

Decision fatigue can lead to poor choices and irrational trade-offs in decision making. It refers to the deteriorating quality of decisions after a long session of decision making. Too many decisions end up depleting our willpower, to the point that we end up making increasingly poor choices.

Impaired ability to make trade-offs. Trade-offs feature several choices that have positive and negative elements. They are a particularly energy-consuming form of decision making. When we are faced with too many trade-offs to consider, we end up mentally depleted, and we make poor choices.

The best way to avoid decision fatigue is by mindfully directing your mental energy towards choices that matter, and minimizing low-stakes energy-consuming decisions.

Some calculations even suggest that not using keyboard shortcuts could make you lose as much as eight days per year, which is equal to more than 3% of your total productivity.

A personal site offers a dedicated place to experiment across the entire tech-stack; not a deliverable for a client that is handed over and then never touched by me again. A personal site is a place to try out that new API, see what can be done with CSS, truly discover what the Web can be.

Tacit knowledge is knowledge that cannot be captured through words alone. Think about riding a bicycle. Riding a bicycle is impossible to teach through descriptions. Sure, you can try to explain what it is you’re doing when you’re cycling, but this isn’t going to be of much help when you’re teaching a kid and they fall into the drain while you’re telling them to “BALANCE! JUST IMAGINE YOU ARE ON A TIGHTROPE AND BALANCE!”.

tacit knowledge does exist, and understanding that it does exist is one of the most useful things you can have happen to you.

People with expertise in any sufficiently complicated domain will always explain their expertise with things like: “Well, do X. Except when you see Y, then do Z, because A. And if you see B, then do P. But if you see A and C but not B, then do Q, because reason D. And then there are weird situations where you do Z but then see thing C emerge, then you should switch to Q.” And if you push further, eventually they might say “Ahh, it just feels right. Do it long enough and it’ll feel right to you too.”

I explained that deliberate practice is, by definition, possible only in domains with a long history of well-established pedagogy. In other words, deliberate practice can only exist in fields like music and math and chess.

The process of learning tacit knowledge looks something like the following: you find a master, you work under them for a few years, and you learn the ropes through emulation, feedback, and osmosis — not through deliberate practice.

The fewer features in any given release, the more measurable the impact.

Key sections to include in your design spec:
• Vision. Why is this feature important? Why is this feature a priority?
• Objectives and key results. What is required for this feature to be complete and what metrics should be measured for success?
• Deliverables and release plan. What user interfaces are being shipped and when? Does it require changes to on-boarding, documentation, billing, and/or marketing?
• Dependencies. What other non-existing features or functionality is required before it can be released?
• Open questions. What needs more clarification?

Attach personas to task flows and wireframes

Creativity is the ability to imagine something that isn’t there — yet. Its core ingredients are ideas. An idea is a new thought that springs up from existing knowledge. Thoughts collide — something “clicks” in our mind.

If you want more ideas, it helps to have more knowledge. We first immerse ourselves in a topic and learn about a subject as much as possible. The more thoughts we have on what is, the likelier it is that some of them combine to an image of what could be.

Text is only present while you are reading it; it is a structure in time. Images are structures in space. They are always right there and can be recognised immediately. These anchors in space amidst the flow of words in time help us remember what the text is about.

Words are conventions, abstract placeholders. Unconventional drawings on the other hand are more specific. They form a bridge between the actual experience of a tree and the conventional word.

Choosing the level of information granularity that is right for the audience will prevent wasted effort and make the information easily digestible.

Less is more: don’t document every single detail or meeting, but just enough to clarify intent or direction, provide a recap, justify a decision, or outline how something should work.

Don’t wait until the last minute to document details; do it along the way as you’re researching and designing to challenge your own biases and rationale, keep your project manageable, and produce better outcomes.

Lightweight documentation provides context, creates shared understanding, and gives the team something to refer back to when questions arise. For this type of documentation:
• Include a plan of what you will do.
• Keep it short — use a one-page bulleted plan or summary.
• Communicate how whatever you will do will impact the team.
• Outline who’s responsible for objectives or artifacts.
• Include dates to timebox input and the effort.

When the right information is captured at the right level of detail and in the right places, it’s not wasteful. Good documentation leads to better and faster decision making, aids in presenting and justifying design decisions, and reduces the cognitive load for the team by acting as a form of external memory.

Taxonomies may be thought of as hierarchies of categories to group and organize information to be found when browsing, or as a structured set of terms used to tag content so that it can be retrieved efficiently and accurately. Sometimes the same taxonomy may serve both purposes, and sometimes two different taxonomies are used, one for each purpose, for the same content or site.
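
A toy sketch of those two roles, with all labels invented: the same terms arranged as a hierarchy for browsing, and used as a flat tag set for retrieval.

```python
# One set of taxonomy terms, two uses (all labels are invented examples).

# 1) A hierarchy of categories for browsing:
hierarchy = {
    "Cookware": {
        "Pots": {},
        "Pans": {"Skillets": {}, "Woks": {}},
    },
}

# 2) The same terms used to tag content for retrieval:
tagged = {
    "wok-buying-guide": {"Woks", "Cookware"},
    "stock-recipes": {"Pots"},
}

def find(term):
    # Retrieve every document tagged with the given term.
    return [doc for doc, terms in tagged.items() if term in terms]

print(find("Woks"))  # ['wok-buying-guide']
```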

Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is not any different. …Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t?

Understanding the users is of central importance, so let’s consider specifically two techniques we can use to make a taxonomy more suitable for its users: (1) adapting the names or labels of the taxonomy concepts (terms) to the language of the users, and (2) adapting the categorization hierarchy to the expectation of the users.

Fortunately, taxonomies are used to describe digital space so there is flexibility. While a physical object can exist in only one place in a kitchen, a library shelf, or a store shelf, the same taxonomy concept representing an idea may exist in more than one place in a taxonomy hierarchy.

Because taxonomies are based on the content they are tagged to, taxonomy creators may fall into the trap of exclusively focusing on making the taxonomy reflect the content without also considering the need of making the taxonomy suitable for its users.

When I encounter inexperienced designers, there’s one thing that always sticks out about the way they think. Or more specifically, the way they don’t think. They don’t think of every design project as a design system. They don’t appreciate the interconnectedness of their design decisions. They solve design problems in isolation, not as a whole.

Systems thinking is the ability or skill to perform problem-solving in complex systems. In application, it has been defined as both a skill and an awareness. A system is an entity with interrelated and interdependent parts; it is defined by its boundaries and is more than the sum of its parts. Changing one part of the system affects other parts and the whole system, with predictable patterns of behaviour.

Systems thinking replaces Design Thinking’s reductionism (the belief that everything can be reduced to individual parts) with:
• Expansionism (the belief that a system is always a sub-system of some larger system)
• Analysis (gaining knowledge of the system by understanding its parts)
• Synthesis (explaining its role in the larger system of which it is a part)

What system thinking really means in design is this: every decision you make affects other past and future decisions. Every pixel you draw is bound by rules governed by your previous choices. When you make a change — no matter how big or small — it may reverberate through your entire design and require other updates to keep the coherence of the system intact.

When you see a design that looks messy and inconsistent, it’s the result of a lack of system thinking. When your design breaks when it’s stretched beyond the happy path, your incomplete system thinking is to blame.

Working atomically and making things reusable early and often — this automates system-wide changes and helps you avoid rework and unintended consequences.

Defining the basics — such as type, colour, spacing — before you get too stuck into other details.

Respecting the butterfly effect. Is a decision in isolation, or will its impact reverberate across other areas of your system? Never make a design decision without first understanding the scope of its consequences.

Empowering yourself (or others) for exploration. A system that’s too prescriptive will stifle creativity. Systems need to evolve organically over time.

Documenting your system. This can be a laborious task, but if anyone other than yourself might collaborate with your designs (including front/back-end dev), you need to document the decisions you make so they can understand the system you’ve created.

Knowledge is not an accumulation of facts, nor is it even a set of facts and their relations. Facts are only rendered meaningful within narratives, and the single-page document is a format very conducive to narrative structure.

On the other hand, the notion of the “document” that is intrinsic to web development today is overdetermined by the legacy of print media. The web document is a static, finished artifact that does not bring in dynamic data. This is strange because it lives on a medium that is alive, networked, and dynamic, a medium which we increasingly understand more as a space than a thing.

A lot of the ideas I talk about in various pieces of writing are connected to one another. When I publish an essay, I’m not done with it. The ideas live on and get renewed, reused, and recycled in later works. Some sentences contain definitions that are core to my mental models, and there are whole paragraphs that might be useful out of context.

UIs that are pretty and friendly are nice to have, but the true values of a good interface should be speed and efficiency to make their users as productive as possible.

When you rock the boat, there will be waves. After you introduce a new feature, change a policy, or remove something, knee-jerk reactions, often negative, will pour in. Resist the urge to panic or rapidly change things in response. Passions flare in the beginning. But if you ride out this initial 24-48 hour period, things will usually settle down. Most people respond before they’ve really dug in and used whatever you’ve added (or gotten along with what you’ve removed). So sit back, take it all in, and don’t make a move until some time has passed. Then you’ll be able to offer a more reasoned response.

• If something fails where you thought it would fail, that is not a failure.

• A multitude of bad ideas is necessary for one good idea.

• Writing down one thing you are grateful for each day is the cheapest possible therapy ever.

• It is much easier to change how you think by changing your behavior, than it is to change your behavior by changing how you think. Act out the change you seek.

• You can reduce the annoyance of someone’s stupid belief by increasing your understanding of why they believe it.

• Pros are just amateurs who know how to gracefully recover from their mistakes.

• The more you are interested in others, the more interesting they find you. To be interesting, be interested.

• To make something good, just do it. To make something great, just re-do it, re-do it, re-do it. The secret to making fine things is in remaking them.

• You can obsess about serving your customers/audience/clients, or you can obsess about beating the competition. Both work, but of the two, obsessing about your customers will take you further.

• Separate the processes of creation from improving. You can’t write and edit, or sculpt and polish, or make and analyze at the same time. If you do, the editor stops the creator. While you invent, don’t select. While you sketch, don’t inspect. While you write the first draft, don’t reflect. At the start, the creator mind must be unleashed from judgement.

• Be prepared: When you are 90% done any large project (a house, a film, an event, an app) the rest of the myriad details will take a second 90% to complete.

In The Timeless Way of Building (1979), Christopher Alexander argues for the counterintuitive proposition that feeling (in the sense of perceiving the beauty and “life” of a space), unlike ideas or opinions, is quite objective; there is an astounding level of agreement between people about how different environments and buildings feel, though there may be little agreement of opinions or ideas about them in general.

centers are zones, sets of points in space, that have wholeness or coherence. They are things that have a particular “fit” with human perception and cognition, such that they are easy to perceive, remember, and describe as wholes. An apple is a center, its stem is a center, a spot on the apple’s surface is a center, perhaps its core is a center, but a random square centimeter of flesh within the apple is not a center.

Good centers also work together: centers of solid matter create coherent space between them, and centers together form a coherent whole. A few rules or features of centers (the following list is a quotation from p. 83):
• The sets which appear as entities are often locally symmetrical – but not always.
• The entities are usually bounded: that is, at their edge, there is often a sharp change of structure.
• Some of the entities are marked by an internal center where there is another change of continuity near the middle of the center itself.
• There is a simplicity and regularity about these sets which marks them as wholes, and makes them function as entities.
• They are often relatively homogenous across their interior, compared with the surrounding spaces.
• There is a topological connectivity in them which marks them as compact.
• They are usually – not always – convex.

All the patterns work together to create larger patterns, with the circles defining zones of space between them, the stitching giving the quilt rich texture and a perceptible axis, and the wide, well-proportioned border strongly emphasizing the shape.

Put another way, what use is a pantry full of ingredients if you don’t know how to combine them in a way that makes an appetising meal?

Each pattern’s documentation is preceded by a list of other patterns that employ the upcoming pattern. Each pattern’s documentation is followed by a list of other patterns that are required for this pattern. What this leads to is a choose-your-own-adventure book that allows you to select a particular pattern and get all the information you need about what work must be done to put the pattern to use, as well as what other features you unlock by employing the pattern.

One of the most appealing things about defining a pattern language or design system in this way is the flexibility that it affords. For novice builders who just need a high-level component or concept of their project, they can find it at the top of the hierarchy and use the component as it comes. For those pioneering new experiences or experimenting beyond the confines of existing product surface, they can take those high-level components and concepts and deconstruct them into their composite parts; they can keep doing this until they have the primitive, atomic components they need to construct a bespoke pattern that still fits with harmony in the wider system.

giving designers and engineers an atomic toolkit and leaving them to it is akin to asking them to create the universe when they just want to create a login screen.

You can think of our system so far as a tree, with its most primitive elements at the bottom, comprising our atomic elements. These atoms—colors, spacing units, typography, and even dimensions of time, as in animations and transitions—are composed into components, and further composed to create patterns, which are common solutions to common problems.
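
A minimal sketch of that composition in code, with every token name invented: atoms feed components, which in turn feed patterns.

```python
# Atoms: the primitive design decisions (invented example values).
atoms = {
    "color.primary": "#0055ff",
    "space.md": "12px",
    "type.body": "16px",
    "motion.fast": "150ms",
}

# Component: a composition of atoms.
button = {
    "background": atoms["color.primary"],
    "padding": atoms["space.md"],
    "font-size": atoms["type.body"],
    "transition": atoms["motion.fast"],
}

# Pattern: a common solution composed from components and more atoms.
login_form = {
    "submit-button": button,
    "field-gap": atoms["space.md"],
}
```

Change an atom and everything composed from it follows, which is the point of building the tree this way.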

I’d like to introduce a top-level element of the tree of our design system: the concept. Think of the concept as the definition of the environment, audience, platform, and purpose of a system, with the ultimate goal of narrowing the relevant patterns and components available to only the sensible ones.

The pattern domain is the fixed, middle piece. It describes the patterns and components that are utilized in order to make the story a reality. This can be thought of as the “execution,” or in linguistic terms, words and sentences. These elements stay roughly the same no matter what the concept is, but their frequency of use and degree of usefulness varies with different concepts.

The final domain is the visual domain, and like the conceptual domain, it changes between products, contexts, and platforms. It describes the primitive pieces that comprise the patterns and components—colors, text, and other “atomic” properties that make a brand or product distinct. This can be thought of as the “expression,” or in linguistic terms, the alphabet.

How do you design interactive systems that are easy to understand? Well, it all comes down to two things: structure and process.

Whenever we look at something we interpret what we see. Looking at the world around us is an active process. We try to see structure.

when someone looks at your design — let’s say a user interface — they need to understand the underlying structure in order to act. How do we design such a structure? We create order.

there are numerous ways to order things. For instance, linear structures are great when it comes to finding shoes of your size in a store, while hierarchical structures help you navigate through the contents of a book or the collection of a museum. By tagging your favourite pictures you create a rhizomatic network structure.

What kind of order we want to use depends on the people who use the information, their intentions and the situation they are in. It is part of our design process to pick suitable orders for the task at hand.

What do I mean when I speak of a system? I use this word to refer to a set of connected elements and their interplay. In the example of our garden all the living and non-living entities (animals, plants, micro-organisms, stones, and so forth) are connected and influence each other. They form a system. The same is true for the blocks of programming code that form an application on your phone.

To design a system means to orchestrate the interplay of its elements.

Whatever interactive system you design, you connect a clear understanding of its purpose — won through structure and process — with an intuitive understanding of the materials you use.

the structure-preserving transformations have a bit of quiet ease. You can imagine going on like that, adding dots here and lines there, just as needed, until it is quite elaborate. As long as each transformation preserves the underlying structure, it will retain its wholeness and beauty. They are not based on any pre-existing image; rather, they are “easy, natural steps which arise from the context (ibid. p. 439).” Even decay can be structure-preserving, when the decaying structure was produced by this process: decay reveals underlying levels of organization that are attractively harmonious, because they formed the basis for the elaboration of the whole (e.g. bones, shipwrecks).

Underneath the universe’s apparent laziness is a deeper laziness: a manner of generation that preserves existing structure. A “structure-preserving transformation” does not impose arbitrary (conscious, legible) order on the system, but takes its cue from the existing structure, and elaborates and strengthens it. One of Christopher Alexander’s terms for this is “the unfolding of wholeness”

A center is an aesthetic concept that is somewhere between geometric, phenomenological, and mystical. It is defined recursively – a center is made up of other centers, and in turn makes up other centers (hence the “field of centers” as the primitive). Centers are the basic building blocks of beauty, except that they’re rarely shaped like blocks.

The centers form the seeds for the next structure-preserving transformation. A step-by-step recipe for beauty:
• start with existing centers or create strong ones that harmonize with the environment
• elaborate on this structure in a way that preserves and strengthens it
• elaborate on this NEW structure, which now includes the most recent elaboration
• repeat until done
• repair as above, or allow to decay
This is the laziest way to do things, so that is how the universe does it.

Clouds are beautiful and never a mess because they are products of structure-preserving processes. However, humans are capable of performing both structure-preserving and structure-destroying processes.

As complex as life seems, a typical human’s behavioral repertoire is made up of a small number of behaviors. These few behaviors make up life; they determine feeling and meaning, moment to moment, day to day.

Here is the generative method of Christopher Alexander, applied to the way one spends one’s time, in pursuit of deep laziness:
1. Find the centers.

A well-developed center will be easy to see; it will produce positive emotion, a feeling of quiet ease, of non-separateness from the world. It will carry many layers of elaboration and generation. It may be completely worked into the fabric of life, touching and intertwining with other centers.

Behavioral “centers” are the things that feel most like reflections of your own self, that seem to connect effortlessly to the underlying wholeness in your life.

2. Elaborate the centers
No one has a routine that works perfectly, unchanged, forever, every season of the year. (If they did, I doubt they’d be reading this.) On the contrary, behaviors and rituals must change and self-repair as the individual and circumstances change.

Notice the structure, notice the misfit (or just the lack of elaboration), adapt the structure in a way that strengthens it. “Each smaller thing has been given its shape after, and in relation to, the larger thing that was established first,” Christopher Alexander says (ibid., p. 437). “It is that which creates the harmonious feeling, since it is that which makes each part adapted and comfortable.”

3. Repair or allow to decay
Some centers will be receding while others increase in intensity. Even those who crave strict, unchanging routines must adapt their routines to life changes; the deeply lazy are constantly adjusting.

Christopher Alexander distinguishes “generated” forms (those created by repeated elaboration, that is, by structure-preserving transformations) from “fabricated” forms (those created in a top-down manner from a pre-existing image, without any kind of interactive unfolding). In terms of the behavioral repertoire, equivalents might be “elaborated time” (lazy time, experienced as an unfolding and elaboration of behavioral centers) and “scheduled time” (behaviors legibilized and organized top-down to satisfy a pre-existing image of proper behavior).

Elaborated time is reached in easy steps, a natural progression arising from each particular context. Its essence is doing the most natural, lazy thing that accords with the context of the whole person and all of the accompanying circumstances.

When storytelling, 3 is the minimum number of samples necessary to trigger pattern recognition. 1 is all by itself, 2 might be a coincidence, but 3 is a pattern

For one, there’s a self-discovery that happens when you revisit things you’ve accumulated over a period of time. You look back and begin to recognize patterns in your own thinking.

Stuart’s idea was that if you could containerize knowledge, like break knowledge down into chunks, then you could reuse those chunks to solve problems in different ways.

Absorb what is useful. Discard what is not.

• Systems are better than goals
• Persuasion skills are vital for success
• Becoming a master of one field is very difficult. Get great at two (or more) fields and become the best at the overlap.

The increase in value of flipping a higher-order bit outweighs flipping all the lower-order bits combined.
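
That is literal binary arithmetic: bit n is worth more than all the bits below it combined, since 2^n > 2^(n-1) + ... + 2 + 1 = 2^n - 1. A one-line check:

```python
n = 7
print(2 ** n, sum(2 ** i for i in range(n)))  # 128 vs 127: the high bit wins
```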

aim to identify and flip the highest order bit because if you don't, you're not going to be able to make up for it by flipping everything else.

The highest order bit implies that you want to always be working on the most important thing. Almost by definition, the most important thing is the thing that moves the needle for your work, not necessarily the thing that is most tractable right now.

You don't always know which bit is the highest-order one, how to flip it, or even what the flipped version looks like.

It takes time to develop the intuition to see bits in the right order and how to flip them. You develop this intuition by producing more work and flipping more bits, even if they end up being lower-order ones.

You don't always have the right resources to flip the highest-order bit. Even the most successful companies in the world are limited by resource constraints. To get more resources, you'll need to flip some lower-order bits first.

Startup decision-making is therefore an exercise in: 1. Inheriting an environment that has many shocking flaws. 2. Identifying the highest-order flaw that you have the ability to address with your limited resources. 3. Picking the best available action to fix it, even if it risks making some lower-order flaws worse. 4. Starting again from step 1 with, hopefully, a less egregious set of shortcomings to choose from.

The principle of least effort is a broad theory that covers diverse fields from evolutionary biology to webpage design. It postulates that animals, people, and even well-designed machines will naturally choose the path of least resistance or "effort".

A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person's intuitive perception about his or her own acts and their consequences. Mental models can help shape behaviour and set an approach to solving problems (similar to a personal algorithm) and doing tasks.

people consistently consider changes that add components over those that subtract them — a tendency that has broad implications for everyday decision-making.

Adams et al. demonstrated that the reason their participants offered so few subtractive solutions is not because they didn’t recognize the value of those solutions, but because they failed to consider them. Indeed, when instructions explicitly mentioned the possibility of subtractive solutions, or when participants had more opportunity to think or practise, the likelihood of offering subtractive solutions increased. It thus seems that people are prone to apply a ‘what can we add here?’ heuristic (a default strategy to simplify and speed up decision-making). This heuristic can be overcome by exerting extra cognitive effort to consider other, less-intuitive solutions.

we propose that the bias towards additive solutions might be further compounded by the fact that subtractive solutions are also less likely to be appreciated. People might expect to receive less credit for subtractive solutions than for additive ones. A proposal to get rid of something might feel less creative than would coming up with something new to add, and it could also have negative social or political consequences

I often have people newer to the tech industry ask me for secrets to success. There aren’t many, really, but this secret — being willing to do something so terrifically tedious that it appears to be magic — works in tech too.

The only “trick” is that this preparation seems so boring, so impossibly tedious, that when we see the effect we can’t imagine that anyone would do something so tedious just for this simple effect.

I don’t disagree: being able to offload repetitive tasks to a program is one of the best things about knowing how to code. However, sometimes problems can’t be solved by automation. If you’re willing to embrace the grind, you’ll look like a magician.

People said I did the impossible, but that’s wrong: I merely did something so boring that nobody else had been willing to do it. Sometimes, programming feels like magic: you chant some arcane incantation and a fleet of robots do your bidding. But sometimes, magic is mundane. If you’re willing to embrace the grind, you can pull off the impossible.

Good plans demand that you pay attention to what might go wrong, as much as (if not more than) what might go right. But, the less I know about a subject, the more likely I am to create a simplistic plan. This is because I lack the experience to know what’ll go wrong.

If I can’t see 7 moves ahead, should I even try to plan for it? I’m starting to think the answer is no. Instead, I need to determine where I want to go. Then, I can try out some approaches. As I do, the environment will teach me what I need to know—so long as I keep paying attention.

When Evergreen notes are factored and titled well, those titles become an abstraction for the note itself. The entire note’s ideas can then be referenced using that handle.

Some effective note “API design” techniques: separation of concerns (Evergreen notes should be atomic), sharp titles (Prefer note titles with complete phrases to sharpen claims), and positive framings (Prefer positive note titles to promote systematic theory).
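To make the “API design” analogy concrete, here is a hedged sketch in which a note title behaves like a well-chosen function name (the claim and names are invented for illustration): other notes can invoke the handle without restating the argument, just as callers never re-read a well-named function's body.

```python
# An atomic note with a sharp, complete-phrase title behaves like a
# well-named function: the title alone is the handle for the whole idea.
def people_default_to_additive_changes() -> str:
    """One concern per note; the name states the claim as a full phrase."""
    return ("When improving a design, the first solutions that come to mind "
            "add components; subtractive options require deliberate search.")

# Elsewhere, the handle suffices; there is no need to restate the argument.
claim = people_default_to_additive_changes()
```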

Once I came to the conclusion that I'd probably quit, and therefore discounted the till-your-death-do-us-part slow accumulation of firm-specific capital, I realized something which is fundamentally true of a lot of day jobs. Nothing I did at the job mattered, in the long run.

Sure, in the short run, I was writing XML files and Java classes which, knock on wood, successfully let my employers ship an examination management system to their client (a major university). I was a really effective Turing machine which accepted emails and tickets as input and delivered (occasionally) working code and Excel files as output. But no matter how much I spun, nothing about my situation ever changed. I worked my week, got to the end of it, and had nothing to show. The next week there would be more emails and more tickets, exactly like the week before. The week after that would be more of the same. And absolutely nothing about my life would change. I'd end the week with nothing.

Don't end the week with nothing. Prefer to work on things you can show. Prefer to work where people can see you. Prefer to work on things you can own.

Because when your work is in public, you can show it to people. That's often the best way to demonstrate that you're capable of doing work like it.

Work you can show off, though, is prima facie evidence of your skills. After your portfolio includes it, your ability to sell your skills gets markedly better.

Thus my first piece of advice: if you have the choice between multiple jobs, all else being equal, pick the one where you are able to show what you've worked on. This could mean working on a language stack where work byproducts are customarily OSSed (e.g. Rails) versus one which isn't (e.g. C#). This could mean working on particular projects within the organization which like external visibility (e.g. Android) rather than projects which don't (e.g. AdWords plumbing -- presumably Google will pay you a lot of money to do that, but consider it compensation for not being able to talk about it). This could mean working in industries which default to being open rather than those which default to being closed.

Even at very open companies there exists lots of secret sauce, but most of the valuable work of the company is not particularly sensitive, and much of it has widely generalizable lessons. Write about those lessons as you learn them. If at all possible, publish what you write. Even if it is published to an audience of no one, you will be able to point people back to it later.

If you cannot build things you can show at work, you should build things you can show outside of work. Companies in our industry are gradually becoming more reasonable about IP assignment clauses -- there's less of the "we own everything you think of at any point in your employment" nonsense these days. Even at my very straight-laced Japanese megacorp, they were willing to write an exception into the employment contract for a) OSS work that I did outside of company hours and b) Bingo Card Creator. I offered them this in exchange: "If you let me continue working on these, I'm going to learn lots of skills which I can put to the use of the company. Normally you invest lots of money sending engineers to conferences and professional training. This is even better for you: I'll learn more with no operating expenditure and no decrease in billing efficiency." That's an offer you can make to substantially any employer.

Vanishingly few people in our industry have the profile of rock stars. They can still have substantial profile among the audience of "people professionally relevant to them." That might be as tightly scoped as "people with hiring authority for front-end developers in my metro area", which might be a set of, what, a couple of dozen folks?

You don't have to optimize for "sexy" projects. You know, sexy projects: I don't know how to describe them but I know it when I see it. Most engineering work isn't intrinsically sexy. I would, however, optimize for impact and visibility.

Consumption Is Sometimes Valuable, But Creation Moves You Forward

I'll close with my usual advice to peers: reading this email was valuable (knock on wood). Watching Jason's video is valuable. Rolling up your sleeves and actually shipping something is much, much more valuable. If you take no other advice from me ever, ship something. You'll learn more shipping a failure than you'll learn from reading about a thousand successes. And you stand an excellent chance of shipping a success -- people greatly overestimate how difficult this is. Just don't end the week with nothing.

Heuristics are the strategies derived from previous experiences with similar problems. These strategies depend on using readily accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues.

Programmable attention centers around making high-level decisions that orchestrate how we later spend our attention.

Ideal systems should have a low skill floor to get started with (and ideally a high skill ceiling). A good example here is how implementations of spaced repetition have improved over time.

Good systems should also be able to increase impact, whether in productivity, learning, or something else, by accounting for context. The main way this happens is by surfacing an item into attention in a timely manner.

Granularity (perhaps not the best term here) is about how higher-level decisions are able to orchestrate many smaller tasks. An example of low granularity is email snoozing: your decision to snooze an email affects a single email, once. Conversely, spaced repetition software has high granularity: your decision to use the software and trust the algorithm allows you to learn a large number of items. The term “programmable attention” is evocative because it captures the idea that automation is essential for high-granularity systems.
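As a concrete illustration of high granularity, here is a minimal sketch of a spaced-repetition scheduler. The interval-doubling rule is a deliberate simplification (real algorithms such as SM-2 use graded responses and ease factors), and all names are illustrative:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    prompt: str
    interval_days: int = 1
    due: date = field(default_factory=date.today)

def review(card: Card, remembered: bool) -> None:
    # One upfront decision -- trusting this rule -- silently schedules
    # many future moments of attention: successes push the card further
    # into the future, failures reset it to tomorrow.
    card.interval_days = card.interval_days * 2 if remembered else 1
    card.due = date.today() + timedelta(days=card.interval_days)

def due_today(cards: list[Card]) -> list[Card]:
    # The system, not the user, decides what surfaces into attention today.
    return [c for c in cards if c.due <= date.today()]
```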

Lastly, we want users to trust the system with their attention. Many systems nowadays try to hijack our attention. Programmable attention wants us to explicitly control it. We want to build tools and systems that we can trust with our attention and to resurface things that we don’t want to slip through the cracks.

But despite the implications of the title Genetic Studies of Genius, the Termites’ high IQs did not predict high levels of creative achievement later in life. Only a few made significant creative contributions to society; none appear to have demonstrated extremely high creativity levels of the sort recognized by major awards, such as the Nobel Prize.

As the study evolved over the years, the term gifted was substituted for genius. Although many people continue to equate intelligence with genius, a crucial conclusion from Terman’s study is that having a high IQ is not equivalent to being highly creative. Subsequent studies by other researchers have reinforced Terman’s conclusions, leading to what’s known as the threshold theory, which holds that above a certain level, intelligence doesn’t have much effect on creativity: most creative people are pretty smart, but they don’t have to be that smart, at least as measured by conventional intelligence tests.

These results confirmed Terman’s findings that creative genius is not the same as high IQ. If having a very high IQ was not what made these writers creative, then what was?

Today’s neuroimaging tools show brain structure with a precision approximating that of the examination of post-mortem tissue; this allows researchers to study all sorts of connections between brain measurements and personal characteristics. For example, we know that London taxi drivers, who must memorize maps of the city to earn a hackney’s license, have an enlarged hippocampus—a key memory region—as demonstrated in a magnetic-resonance-imaging, or MRI, study.

Based on all this, I surmised that observing which parts of the brain are most active during free association would give us clues about the neural basis of creativity. And what did we find? Sure enough, the association cortices were wildly active during REST.

I realized that I obviously couldn’t capture the entire creative process—instead, I could home in on the parts of the brain that make creativity possible. Once I arrived at this idea, the design for the imaging studies was obvious: I needed to compare the brains of highly creative people with those of control subjects as they engaged in tasks that activated their association cortices.

And we examined a condition that we called random episodic silent thought, or REST: we asked subjects to lie quietly with their eyes closed, to relax, and to think about whatever came to mind. In essence, they would be engaged in “free association,” letting their minds wander. The acronym REST was intentionally ironic; we suspected that the association regions of the brain would actually be wildly active during this state.

As I hypothesized, the creative people have shown stronger activations in their association cortices during all four tasks than the controls have. This pattern has held true for both the artists and the scientists, suggesting that similar brain processes may underlie a broad spectrum of creative expression. Common stereotypes about “right brained” versus “left brained” people notwithstanding, this parallel makes sense. Many creative people are polymaths, people with broad interests in many fields—a common trait among my study subjects.

So far, this study—which has examined 13 creative geniuses and 13 controls—has borne out a link between mental illness and creativity similar to the one I found in my Writers’ Workshop study. The creative subjects and their relatives have a higher rate of mental illness than the controls and their relatives do (though not as high a rate as I found in the first study), with the frequency being fairly even across the artists and the scientists. The most-common diagnoses include bipolar disorder, depression, anxiety or panic disorder, and alcoholism. I’ve also found some evidence supporting my early hypothesis that exceptionally creative people are more likely than control subjects to have one or more first-degree relatives with schizophrenia.

As in the first study, I’ve also found that creativity tends to run in families, and to take diverse forms. In this arena, nurture clearly plays a strong role. Half the subjects come from very high-achieving backgrounds, with at least one parent who has a doctoral degree. The majority grew up in an environment where learning and education were highly valued.

Many creative people are autodidacts. They like to teach themselves, rather than be spoon-fed information or knowledge in standard educational settings. Famously, three Silicon Valley creative geniuses have been college dropouts: Bill Gates, Steve Jobs, and Mark Zuckerberg. Steve Jobs—for many, the archetype of the creative person—popularized the motto “Think different.” Because their thinking is different, my subjects often express the idea that standard ways of learning and teaching are not always helpful and may even be distracting, and that they prefer to learn on their own.

I can attest to the fact that most software companies build software around specifications and mockups — a written description of a given feature, and a collection of images that approximates the final interface. The ‘conventional’ way software engineers work is to take these specifications (usually written by the product or program manager) and the mockups, and then implement them with feedback from a designer. But Apple’s approach was different: they focused on interactive, working demos, and made that the minimum unit of discussion.

Kocienda calls this process of prototype -> demo -> improvement -> repeat a process of ‘creative selection’ … hence the title of the book.

My takeaway from this section of Kocienda’s book is that demos should be the basic unit of discussion if you’re creating user-facing software from scratch.

My friends at Google tell me that whiteboarding and spec-writing are a core part of their process, which raises the question: could it be that the domains at which Google excels — e.g. complex distributed systems operating at massive scale — benefit primarily from a spec-driven approach, whereas ‘creative selection’ works better for user-facing products?

We arrived at our final decisions only with judgment and time. Heuristics are like this. They’re subjective.

A natural implication of this, though, is that the process of creative selection cannot work within the context of a large team. Kocienda argues that Apple’s process calls for a small-team approach:

- Inspiration, which means thinking big ideas and imagining what might be possible, as when Imran saw how smooth finger tracking would be the key to people connecting to iPhone experiences through touch.
- Collaboration, which means working together well with other people and seeking to combine your complementary strengths, as when Darin and Trey helped me make the insertion point move correctly in WebKit word processing.
- Craft, which means applying skill to achieve high-quality results and always striving to do better, as when the Safari team made the web browser faster and faster by running the Page Load Test, trying to understand what this test program told us about our software, and using these findings to optimize our code.
- Diligence, which means doing the necessary grunt work and never resorting to shortcuts or half measures, as when we persisted through the tedium of fixing cross-references to get Safari to build in the lead-up to the Black Slab Encounter.
- Decisiveness, which means making tough choices and refusing to delay or procrastinate, as when Steve Jobs made me pick the better keyboard layout for the iPad on the spot while he waited, rather than just offering the two different designs Bas and I developed.
- Taste, which means developing a refined sense of judgment and finding the balance that produces a pleasing and integrated whole, as when we made the choice to offer a QWERTY keyboard layout for the iPhone.
- Empathy, which means trying to see the world from other people’s perspectives and creating work that fits into their lives and adapts to their needs, as when Scott Herz made a game to find the best size for touch targets so it was comfortable to tap the iPhone display and accommodated people with varying levels of dexterity.

Ultimately, I will be arguing that to foster optimal human innovation, digital creative tools need to be interoperable, moldable, efficient, and community-driven.

For the purposes of this essay, “digital creative tool” will refer to tools that foster creative thought in their human users, whether that be a design, writing, or multimedia tool.

Many of the original creative tools were inspired by the work of thinkers such as Douglas Engelbart, who published a vision of tools empowering humans in his pioneering 1962 paper Augmenting Human Intellect: A Conceptual Framework. As the title suggests, his goal was to use digital tools to augment human intelligence and thus boost our collective intellect.

Engelbart’s work, along with many other designers and programmers such as Ivan Sutherland (Sketchpad³), J.C.R. Licklider (Intergalactic Computer Network⁴) and Alan Kay (Dynabook⁵ and OOP⁶) clarified a new approach in which computers weren’t merely executors, but “joyful” machines that could expand human thought itself (Engelbart). As Engelbart put it, “tools… will serve as new media of expression and inspiration to creativity.”

Bill Atkinson, its developer, described HyperCard as “an erector set for building applications.” Simply put, you could build your own software using HyperCard, with each program made up of ‘stacks’ of ‘cards’. Each card could contain text and images, as well as interactive elements like buttons, with the ability to interconnect between other cards. HyperCard was a tool for making tools, and was unique in its focus on encouraging open collaboration between users.

The origins of digital creative tools show that the most boundary-pushing and high-potential tools were often interoperable, moldable, community-driven, abstracted, and efficient, thus actualizing creativity within the tool itself. Upon review, it is clear that the fundamental human-computer interaction principles of the past have remained unchanged, such as the direct manipulation of graphical objects, the mouse, and windows. However, it is also evident that our expectations for a computer’s capacity to understand and serve us have expanded considerably.

Despite their brief moment in the internet limelight, creative computing communities have failed to maintain power over the software they operate. A close examination of the digital creative tools landscape today reveals that most innovation in this space has shifted away from the computer users (as seen through Actionscript and Flash) to the heavily funded R&D departments of the dominant Silicon Valley technology companies.

This business-driven innovation fails to invest in the creativity of human beings themselves, instead seeking to streamline our capability to act as an execution machine. Directionally, this thinking clearly contradicts the ideas of early digital creative tool pioneers such as Engelbart and Kay. Furthermore, this reality underscores the need to reconsider our priorities for progress if we truly wish to foster greater human innovation.

When considering the Obscure Features Hypothesis for Innovation,¹² which states that every innovative solution is built upon at least one new or commonly overlooked feature of a problem, you can see why AI may never advance enough to take the jobs of Chief Innovation Officers. While there are situations in which computers are able to figure out the steps needed to get from state A to state B, they still operate within the limits of decompositions that humans have already mandated. This means that the actions we take with computers are steps that we have already broken down in our heads in order to achieve an end result.

As Shan Carter and Michael Nielsen explain succinctly in Using Artificial Intelligence to Augment Human Intelligence, “Intelligence Augmentation (IA), is all about empowering humans with tools that make them more capable and more intelligent, while Artificial Intelligence (AI) has been about removing humans fully from the loop.”

The lack of interoperability¹³ between creative tools means that all work created within a tool is confined to the limitations of that tool itself, posing a hindrance to collaboration and limiting creative possibility. If tools are meant to amplify the power of our brains and take over the mechanical aspects of human thought, limiting creation to a single piece of software’s capabilities is clearly antithetical to creativity.

Interoperability has been key to building upon each other's innovations and contributions, and to accelerating the field's pace of innovation.

However, it is also worth examining the issues associated with interoperability. It can slow down improvements and lead to inconsistent adoption of open standards. A telling example is the lack of universal browser compatibility for HTML/CSS features, which adds unnecessary complexity to web development work.

Standardization would fundamentally change the tide of digital creative tools for the better by allowing in more collaborators, making space for greater tooling innovation, and expanding a project’s creative constraints beyond any one tool itself.

Returning to Engelbart’s guiding principle, computers have the power to change and expand human thought. But to do so, software must adapt to suit the user's unique thought process. This idea shines a light on the importance of a tool’s moldability, as measured by how easily the software can be customized to the average non-programmer’s needs.

As Bill Gaver wrote in Designing for Homo Ludens: “The designer’s role in this is not like that of a doctor, prescribing cures for people’s ills; nor is the designer a kind of servant, developing technologies that people know they want. Instead, designers should be provocateurs, seeking out new possibilities for play and crafting technologies that entice people to explore them.”

In considering how computers might become better co-creators and actualizers, we can begin to identify more granular ways that these machines might become more classically “efficient” in facilitating human creativity. However, it is important that we first acknowledge the tension between efficiency and creativity. While on the surface the two dimensions may seem at odds with one another, this does not have to be the case. The computer’s execution skill set lends itself to minimizing the need for any sort of repetitive or monotonous work that might hinder the creative process. This truth refocuses our attention on fostering creative thought (such as ideation) as opposed to linear thought (such as creating functional architectures), seeing as the computer is already highly skilled at accommodating the latter.