A feed of passages I’ve highlighted from articles across the web.
Well, if you write the way I do, which means that you start something and you rewrite it, especially the beginning part, you rewrite it time and time and time again, at some point you feel, not that it’s beyond repair, but that there’s nothing more you can do to fix it. In other words, as far as you’re concerned, that’s what you were going to write.
written in plain language that is easy to understand for non-technical users and that does not imply that the mistake is the user's fault
Hierarchical structures are usually forced and artificial. Intertwingularity is not generally acknowledged — people think they can make things hierarchical, categorizable and sequential when they can't.
We must remember quantitative data is only relevant to the actions currently available and therefore has the potential to limit our thinking.
people set higher standards of evidence for hypotheses that go against their current expectations. We tend to search for, interpret, favor, and recall information in a way that confirms or supports our prior beliefs or values.
“pay attention to what users do, not what they say”. User actions and feedback are seldom aligned, and this can introduce inaccurate data into the design process if we rely too much on user feedback alone.
False consensus effect: a pervasive cognitive bias that causes people to see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances.
Meadows defines a system as “an interconnected set of elements that is coherently organized in a way that achieves something.”
I’d suggest that the interconnections in a design system include design constraints, the feedback loop between implementers of the design system and the system’s consumers, the cadence at which implementers update documentation, the amount of money the company invests in the design system, and the way that the component library reads design tokens.
Meadows points out that “purposes are deduced from behavior, not from rhetoric or stated goals,” meaning that the only real way to understand a system’s purpose is by watching its behavior. As implementers of design systems, we often try to dictate the system’s purpose. We might state goals like: “the design system will improve design consistency.” Unfortunately, saying it doesn’t make it true. We need to make sure that we understand the different elements and interconnections in our system. Modifying the way elements are interconnected can allow us to adjust the purpose of the system — but we need to make sure that we understand it first.
the way HSL calculates lightness is flawed. What most color spaces don’t take into account is that different hues are inherently perceived as different levels of lightness by the human eye—at the same level of mathematical lightness, yellow appears lighter than blue.
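To make the claim concrete, here is a minimal Python sketch (standard library only). It compares HSL lightness with the WCAG relative-luminance formula, which stands in here for “perceived lightness” — that choice of proxy is this illustration’s assumption, not something from the passage. Pure yellow and pure blue share the same mathematical lightness, yet their luminance differs by more than a factor of ten.

```python
# A rough check of the claim, using WCAG relative luminance as a proxy for
# perceived lightness. Pure yellow and pure blue share the same HSL lightness,
# yet their luminance differs by more than a factor of ten.
import colorsys

def relative_luminance(r: float, g: float, b: float) -> float:
    """WCAG 2.x relative luminance for sRGB channels in [0, 1]."""
    def linearize(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

for name, rgb in {"yellow": (1.0, 1.0, 0.0), "blue": (0.0, 0.0, 1.0)}.items():
    _, hsl_lightness, _ = colorsys.rgb_to_hls(*rgb)  # stdlib uses H, L, S order
    print(f"{name:6}  HSL lightness: {hsl_lightness:.2f}  "
          f"relative luminance: {relative_luminance(*rgb):.4f}")

# yellow  HSL lightness: 0.50  relative luminance: 0.9278
# blue    HSL lightness: 0.50  relative luminance: 0.0722
```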
Tiny Wins are often shortcuts. They save a user’s time by getting rid of existing steps — physical or mental — required to perform an action. This is a really useful way to think about the types of changes we saw above, and a good way to differentiate them from other low-hanging fruits that don’t belong on your lists.
Tiny Wins are high impact. They affect things that the majority of users interact with on a regular basis. If the change won’t have the compounding effects we’ve discussed, it doesn’t belong on the list.
Tiny Wins are low effort. These projects are straightforward, scoped, and take a short amount of time. If the change requires a significant amount of time and effort, it doesn’t belong on the list.
Tiny Wins are standalone. These changes are small, scoped, and provide their own value. If the change can’t be appreciated on its own, it doesn’t belong on the list.
I noticed something weird about the issues we solved with these changes. They were almost never reported. Hundreds of people were ecstatic when we added that arrow to PR pages. Out of those, not a single one indicated that this flow was confusing. A lot of people assumed it was their own fault for not just “getting” it. Others get so accustomed to these flows that they don’t even notice their anxiety. If they do, it’s just part of life. The status quo. Something to live with, not improve.
MVPs and iteration are powerful tools that should be leveraged by companies looking to move quickly. But Tiny Wins are much more potent when it comes to filling the gaps, improving retention, and nurturing your community of users.
Because of this, these changes were perceived and acknowledged as fresh, complete features. They communicated to users that they were being listened to. These features bred excitement, goodwill, and likely loyalty towards their respective companies. Hell, they probably even contributed to some organic growth.
“You’re efficient when you do something with minimum waste. And you’re effective when you’re doing the right something.”
DeMarco defines slack as “the degree of freedom required to effect change. Slack is the natural enemy of efficiency and efficiency is the natural enemy of slack.” Elsewhere, he writes: “Slack represents operational capacity sacrificed in the interests of long-term health.”
Having a little bit of wiggle room allows us to respond to changing circumstances, to experiment, and to do things that might not work.
Decision fatigue refers to the deteriorating quality of decisions after a long session of decision making; it can lead to poor choices and irrational trade-offs. Too many decisions end up depleting our willpower, to the point that we make increasingly poor choices.
Impaired ability to make trade-offs. Trade-offs feature several choices that have positive and negative elements. They are a particularly energy-consuming form of decision making. When we are faced with too many trade-offs to consider, we end up mentally depleted, and we make poor choices.
“Research shows that the more we write about what we are doing and thinking, the clearer our ideas become. Repeated writing on a theme allows for the development of abstract ideas and complex relationships. Furthermore, when we return to earlier entries in our journals, we may discover we are able to answer a question, or we may suddenly understand the importance of a certain thought to the development of our work.” (RMIT Study and Learning Centre, 2012)
Tacit knowledge is knowledge that cannot be captured through words alone. Think about riding a bicycle. Riding a bicycle is impossible to teach through descriptions. Sure, you can try to explain what it is you’re doing when you’re cycling, but this isn’t going to be of much help when you’re teaching a kid and they fall into the drain while you’re telling them to “BALANCE! JUST IMAGINE YOU ARE ON A TIGHTROPE AND BALANCE!”.
tacit knowledge does exist, and understanding that it does exist is one of the most useful things you can have happen to you.
People with expertise in any sufficiently complicated domain will always explain their expertise with things like: “Well, do X. Except when you see Y, then do Z, because A. And if you see B, then do P. But if you see A and C but not B, then do Q, because reason D. And then there are weird situations where you do Z but then see thing C emerge, then you should switch to Q.” And if you push further, eventually they might say “Ahh, it just feels right. Do it long enough and it’ll feel right to you too.”
I explained that deliberate practice is, by definition, possible only in domains with a long history of well-established pedagogy. In other words, deliberate practice can only exist in fields like music and math and chess.
Key sections to include in your design spec:
• Vision. Why is this feature important? Why is this feature a priority?
• Objectives and key results. What is required for this feature to be complete, and what metrics should be measured for success?
• Deliverables and release plan. What user interfaces are being shipped, and when? Does it require changes to on-boarding, documentation, billing, and/or marketing?
• Dependencies. What other not-yet-built features or functionality are required before it can be released?
• Open questions. What needs more clarification?
Creativity is the ability to imagine something that isn’t there — yet. Its core ingredients are ideas. An idea is a new thought that springs up from existing knowledge. Thoughts collide — something “clicks” in our mind.
If you want more ideas, it helps to have more knowledge. We first immerse ourselves in a topic and learn about a subject as much as possible. The more thoughts we have on what is, the likelier it is that some of them combine into an image of what could be.
Text is only present while you are reading it; it is a structure in time. Images are structures in space. They are always right there and can be recognised immediately. These anchors in space amidst the flow of words in time help us remember what the text is about.
Choosing the level of information granularity that is right for the audience will prevent wasted effort and make the information easily digestible.
Less is more: don’t document every single detail or meeting, but just enough to clarify intent or direction, provide a recap, justify a decision, or outline how something should work.
Don’t wait until the last minute to document details; do it along the way as you’re researching and designing to challenge your own biases and rationale, keep your project manageable, and produce better outcomes.
Lightweight documentation provides context, creates shared understanding, and gives the team something to refer back to when questions arise. For this type of documentation:
• Include a plan of what you will do.
• Keep it short — use a one-page bulleted plan or summary.
• Communicate how whatever you will do will impact the team.
• Outline who’s responsible for objectives or artifacts.
• Include dates to timebox input and the effort.
When the right information is captured at the right level of detail and in the right places, it’s not wasteful. Good documentation leads to better and faster decision making, aids in presenting and justifying design decisions, and reduces the cognitive load for the team by acting as a form of external memory.
Taxonomies may be thought of as hierarchies of categories to group and organize information to be found when browsing, or as a structured set of terms used to tag content so that it can be retrieved efficiently and accurately. Sometimes the same taxonomy may serve both purposes, and sometimes two different taxonomies are used, one for each purpose, for the same content or site.
Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is not any different. …Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t?
Understanding the users is of central importance, so let’s consider specifically two techniques we can use to make a taxonomy more suitable for its users: (1) adapting the names or labels of the taxonomy concepts (terms) to the language of the users, and (2) adapting the categorization hierarchy to the expectation of the users.
Fortunately, taxonomies are used to describe digital space, so there is flexibility. While a physical object can exist in only one place (in a kitchen, on a library shelf, or on a store shelf), the same taxonomy concept representing an idea may exist in more than one place in a taxonomy hierarchy.
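As a tiny illustration of that flexibility, here is a sketch of a polyhierarchy in Python; the categories and terms are hypothetical, and the point is simply that one concept can be filed under two broader categories at once, which no physical shelf allows.

```python
# A hypothetical polyhierarchy: the concept "Small Appliances" is filed under
# two broader categories at once.
taxonomy = {
    "Kitchen & Dining": ["Cookware", "Small Appliances"],
    "Electronics": ["Small Appliances", "Audio"],
}

def broader_categories(term: str) -> list[str]:
    """Return every category a term is filed under."""
    return [category for category, terms in taxonomy.items() if term in terms]

print(broader_categories("Small Appliances"))
# ['Kitchen & Dining', 'Electronics']
```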
When I encounter inexperienced designers, there’s one thing that always sticks out about the way they think. Or more specifically, the way they don’t think. They don’t think of every design project as a design system. They don’t appreciate the interconnectedness of their design decisions. They solve design problems in isolation, not as a whole.
Systems thinking is the ability or skill to perform problem-solving in complex systems. In application, it has been defined as both a skill and an awareness. A system is an entity with interrelated and interdependent parts; it is defined by its boundaries and is more than the sum of its parts. Changing one part of the system affects other parts and the whole system, with predictable patterns of behaviour.
Systems thinking replaces Design Thinking’s reductionism (the belief that everything can be reduced to individual parts) with:
• Expansionism (the belief that a system is always a sub-system of some larger system)
• Analysis (gaining knowledge of the system by understanding its parts)
• Synthesis (explaining its role in the larger system of which it is a part)
What system thinking really means in design is this: every decision you make affects other past and future decisions. Every pixel you draw is bound by rules governed by your previous choices. When you make a change — no matter how big or small — it may reverberate through your entire design and require other updates to keep the coherence of the system intact.
When you see a design that looks messy and inconsistent, it’s the result of a lack of system thinking. When your design breaks when it’s stretched beyond the happy path, your incomplete system thinking is to blame.
Working atomically and making things reusable early and often — this automates system-wide changes and helps you avoid rework and unintended consequences.
Respecting the butterfly effect. Does a decision stand in isolation, or will its impact reverberate across other areas of your system? Never make a design decision without first understanding the scope of its consequences.
Empowering yourself (or others) for exploration. A system that’s too prescriptive will stifle creativity. Systems need to evolve organically over time.
Knowledge is not an accumulation of facts, nor is it even a set of facts and their relations. Facts are only rendered meaningful within narratives, and the single-page document is a format very conducive to narrative structure.
On the other hand, the notion of the “document” that is intrinsic to web development today is overdetermined by the legacy of print media. The web document is a static, finished artifact that does not bring in dynamic data. This is strange because it lives on a medium that is alive, networked, and dynamic, a medium which we increasingly understand more as a space than a thing.
A lot of the ideas I talk about in various pieces of writing are connected to one another. When I publish an essay, I’m not done with it. The ideas live on and get renewed, reused, and recycled in later works. Some sentences contain definitions that are core to my mental models, and there are whole paragraphs that might be useful out of context.
When you rock the boat, there will be waves. After you introduce a new feature, change a policy, or remove something, knee-jerk reactions, often negative, will pour in. Resist the urge to panic or rapidly change things in response. Passions flare in the beginning. But if you ride out this initial 24-48 hour period, things will usually settle down. Most people respond before they’ve really dug in and used whatever you’ve added (or gotten along with what you’ve removed). So sit back, take it all in, and don’t make a move until some time has passed. Then you’ll be able to offer a more reasoned response.
• It is much easier to change how you think by changing your behavior, than it is to change your behavior by changing how you think. Act out the change you seek.
• The more you are interested in others, the more interesting they find you. To be interesting, be interested.
• To make something good, just do it. To make something great, just re-do it, re-do it, re-do it. The secret to making fine things is in remaking them.
• You can obsess about serving your customers/audience/clients, or you can obsess about beating the competition. Both work, but of the two, obsessing about your customers will take you further.
• Separate the processes of creation from improving. You can’t write and edit, or sculpt and polish, or make and analyze at the same time. If you do, the editor stops the creator. While you invent, don’t select. While you sketch, don’t inspect. While you write the first draft, don’t reflect. At the start, the creator mind must be unleashed from judgement.
In The Timeless Way of Building (1979), Christopher Alexander argues for the counterintuitive proposition that feeling (in the sense of perceiving the beauty and “life” of a space), unlike ideas or opinions, is quite objective; there is an astounding level of agreement between people about how different environments and buildings feel, though there may be little agreement of opinions or ideas about them in general.
centers are zones, sets of points in space, that have wholeness or coherence. They are things that have a particular “fit” with human perception and cognition, such that they are easy to perceive, remember, and describe as wholes. An apple is a center, its stem is a center, a spot on the apple’s surface is a center, perhaps its core is a center, but a random square centimeter of flesh within the apple is not a center.
Good centers also work together: centers of solid matter create coherent space between them, and centers together form a coherent whole. A few rules or features of centers (the following list is a quotation from p. 83):
• The sets which appear as entities are often locally symmetrical – but not always.
• The entities are usually bounded: that is, at their edge, there is often a sharp change of structure.
• Some of the entities are marked by an internal center where there is another change of continuity near the middle of the center itself.
• There is a simplicity and regularity about these sets which marks them as wholes, and makes them function as entities.
• They are often relatively homogenous across their interior, compared with the surrounding spaces.
• There is a topological connectivity in them which marks them as compact.
• They are usually – not always – convex.
Put another way, what use is a pantry full of ingredients if you don’t know how to combine them in a way that makes an appetising meal?
Each pattern’s documentation is preceded by a list of other patterns that employ the upcoming pattern. Each pattern’s documentation is followed by a list of other patterns that are required for this pattern. What this leads to is a choose-your-own-adventure book that allows you to select a particular pattern and get all the information you need about what work must be done to put the pattern to use, as well as what other features you unlock by employing the pattern.
One of the most appealing things about defining a pattern language or design system in this way is the flexibility that it affords. For novice builders who just need a high-level component or concept of their project, they can find it at the top of the hierarchy and use the component as it comes. For those pioneering new experiences or experimenting beyond the confines of existing product surface, they can take those high-level components and concepts and deconstruct them into their composite parts; they can keep doing this until they have the primitive, atomic components they need to construct a bespoke pattern that still fits with harmony in the wider system.
giving designers and engineers an atomic toolkit and leaving them to it is akin to asking them to create the universe when they just want to create a login screen.
You can think of our system so far as a tree, with its most primitive elements at the bottom, comprising our atomic elements. These atoms—colors, spacing units, typography, and even dimensions of time, as in animations and transitions—are composed into components, and further composed to create patterns, which are common solutions to common problems.
I’d like to introduce a top-level element of the tree of our design system: the concept. Think of the concept as the definition of the environment, audience, platform, and purpose of a system, with the ultimate goal of narrowing the relevant patterns and components available to only the sensible ones.
The pattern domain is the fixed, middle piece. It describes the patterns and components that are utilized in order to make the story a reality. This can be thought of as the “execution,” or in linguistic terms, words and sentences. These elements stay roughly the same no matter what the concept is, but their frequency of use and degree of usefulness varies with different concepts.
The final domain is the visual domain, and like the conceptual domain, it changes between products, contexts, and platforms. It describes the primitive pieces that comprise the patterns and components—colors, text, and other “atomic” properties that make a brand or product distinct. This can be thought of as the “expression,” or in linguistic terms, the alphabet.
How do you design interactive systems that are easy to understand? Well, it all comes down to two things: structure and process.
Whenever we look at something we interpret what we see. Looking at the world around us is an active process. We try to see structure.
when someone looks at your design — let’s say a user interface — they need to understand the underlying structure in order to act. How do we design such a structure? We create order.
there are numerous ways to order things. For instance, linear structures are great when it comes to finding shoes of your size in a store, while hierarchical structures help you navigate through the contents of a book or the collection of a museum. By tagging your favourite pictures you create a rhizomatic network structure.
What kind of order we want to use depends on the people who use the information, their intentions and the situation they are in. It is part of our design process to pick suitable orders for the task at hand.
What do I mean when I speak of a system? I use this word to refer to a set of connected elements and their interplay. In the example of our garden all the living and non-living entities (animals, plants, micro-organisms, stones, and so forth) are connected and influence each other. They form a system. The same is true for the blocks of programming code that form an application on your phone.
the structure-preserving transformations have a bit of quiet ease. You can imagine going on like that, adding dots here and lines there, just as needed, until it is quite elaborate. As long as each transformation preserves the underlying structure, it will retain its wholeness and beauty. They are not based on any pre-existing image; rather, they are “easy, natural steps which arise from the context” (ibid., p. 439). Even decay can be structure-preserving, when the decaying structure was produced by this process: decay reveals underlying levels of organization that are attractively harmonious, because they formed the basis for the elaboration of the whole (e.g. bones, shipwrecks).
Underneath the universe’s apparent laziness is a deeper laziness: a manner of generation that preserves existing structure. A “structure-preserving transformation” does not impose arbitrary (conscious, legible) order on the system, but takes its cue from the existing structure, and elaborates and strengthens it. One of Christopher Alexander’s terms for this is “the unfolding of wholeness”
A center is an aesthetic concept that is somewhere between geometric, phenomenological, and mystical. It is defined recursively – a center is made up of other centers, and in turn makes up other centers (hence the “field of centers” as the primitive). Centers are the basic building blocks of beauty, except that they’re rarely shaped like blocks.
The centers form the seeds for the next structure-preserving transformation. A step-by-step recipe for beauty:
• start with existing centers, or create strong ones that harmonize with the environment
• elaborate on this structure in a way that preserves and strengthens it
• elaborate on this NEW structure, which now includes the most recent elaboration
• repeat until done
• repair as above, or allow to decay
This is the laziest way to do things, so that is how the universe does it.
Clouds are beautiful and never a mess because they are products of structure-preserving processes. However, humans are capable of performing both structure-preserving and structure-destroying processes.
As complex as life seems, a typical human’s behavioral repertoire is made up of a small number of behaviors. These few behaviors make up life; they determine feeling and meaning, moment to moment, day to day.
Here is the generative method of Christopher Alexander, applied to the way one spends one’s time, in pursuit of deep laziness: 1. Find the centers.
A well-developed center will be easy to see; it will produce positive emotion, a feeling of quiet ease, of non-separateness from the world. It will carry many layers of elaboration and generation. It may be completely worked into the fabric of life, touching and intertwining with other centers.
Behavioral “centers” are the things that feel most like reflections of your own self, that seem to connect effortlessly to the underlying wholeness in your life.
2. Elaborate the centers. No one has a routine that works perfectly, unchanged, forever, every season of the year. (If they did, I doubt they’d be reading this.) On the contrary, behaviors and rituals must change and self-repair as the individual and circumstances change.
Notice the structure, notice the misfit (or just the lack of elaboration), adapt the structure in a way that strengthens it. “Each smaller thing has been given its shape after, and in relation to, the larger thing that was established first,” Christopher Alexander says (ibid., p. 437). “It is that which creates the harmonious feeling, since it is that which makes each part adapted and comfortable.”
3. Repair or allow to decay. Some centers will be receding while others increase in intensity. Even those who crave strict, unchanging routines must adapt their routines to life changes; the deeply lazy are constantly adjusting.
Christopher Alexander distinguishes “generated” forms (those created by repeated elaboration, that is, by structure-preserving transformations) from “fabricated” forms (those created in a top-down manner from a pre-existing image, without any kind of interactive unfolding). In terms of the behavioral repertoire, equivalents might be “elaborated time” (lazy time, experienced as an unfolding and elaboration of behavioral centers) and “scheduled time” (behaviors legibilized and organized top-down to satisfy a pre-existing image of proper behavior).
For one, there’s a self-discovery that happens when you revisit things you’ve accumulated over a period of time. You look back and begin to recognize patterns in your own thinking.
The increase in value of flipping a higher-order bit outweighs flipping all the lower-order bits combined.
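Taken literally, this is plain binary arithmetic: bit n is worth 2^n, which is exactly one more than the combined value of every lower-order bit. A minimal check:

```python
# Bit n is worth 2**n, exactly one more than all lower-order bits combined.
n = 7
highest_bit = 1 << n            # 0b10000000 == 128
all_lower_bits = (1 << n) - 1   # 0b01111111 == 127
assert highest_bit > all_lower_bits
```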
aim to identify and flip the highest order bit because if you don't, you're not going to be able to make up for it by flipping everything else.
The highest order bit implies that you want to always be working on the most important thing. Almost by definition, the most important thing is the thing that moves the needle for your work, not necessarily the thing that is most tractable right now.
You don't always know which bit is the highest-order one, how to flip it, or even what the flipped version looks like.
It takes time to develop the intuition to see bits in the right order and how to flip them. You develop this intuition by producing more work and flipping more bits, even if they end up being lower-order ones.
You don't always have the right resources to flip the highest-order bit. Even the most successful companies in the world are limited by resource constraints. To get more resources, you'll need to flip some lower-order bits first.
Startup decision-making is therefore an exercise in:
1. Inheriting an environment that has many shocking flaws.
2. Identifying the highest-order flaw that you have the ability to address with your limited resources.
3. Picking the best available action to fix it, even if it risks making some lower-order flaws worse.
4. Starting again from step 1 with, hopefully, a less egregious set of shortcomings to choose from.
A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person's intuitive perception about his or her own acts and their consequences. Mental models can help shape behaviour and set an approach to solving problems (similar to a personal algorithm) and doing tasks.
people consistently consider changes that add components over those that subtract them — a tendency that has broad implications for everyday decision-making.
Adams et al. demonstrated that the reason their participants offered so few subtractive solutions is not because they didn’t recognize the value of those solutions, but because they failed to consider them. Indeed, when instructions explicitly mentioned the possibility of subtractive solutions, or when participants had more opportunity to think or practise, the likelihood of offering subtractive solutions increased. It thus seems that people are prone to apply a ‘what can we add here?’ heuristic (a default strategy to simplify and speed up decision-making). This heuristic can be overcome by exerting extra cognitive effort to consider other, less-intuitive solutions.
we propose that the bias towards additive solutions might be further compounded by the fact that subtractive solutions are also less likely to be appreciated. People might expect to receive less credit for subtractive solutions than for additive ones. A proposal to get rid of something might feel less creative than would coming up with something new to add, and it could also have negative social or political consequences
I often have people newer to the tech industry ask me for secrets to success. There aren’t many, really, but this secret — being willing to do something so terrifically tedious that it appears to be magic — works in tech too.
The only “trick” is that this preparation seems so boring, so impossibly tedious, that when we see the effect we can’t imagine that anyone would do something so tedious just for this simple effect.
I don’t disagree: being able to offload repetitive tasks to a program is one of the best things about knowing how to code. However, sometimes problems can’t be solved by automation. If you’re willing to embrace the grind you’ll look like a magician.
People said I did the impossible, but that’s wrong: I merely did something so boring that nobody else had been willing to do it. Sometimes, programming feels like magic: you chant some arcane incantation and a fleet of robots do your bidding. But sometimes, magic is mundane. If you’re willing to embrace the grind, you can pull off the impossible.
Good plans demand that you pay attention to what might go wrong, as much as (if not more than) what might go right. But, the less I know about a subject, the more likely I am to create a simplistic plan. This is because I lack the experience to know what’ll go wrong.
When Evergreen notes are factored and titled well, those titles become an abstraction for the note itself. The entire note’s ideas can then be referenced using that handle
Once I came to the conclusion that I'd probably quit, and therefore discounted the till-your-death-do-us-part slow accumulation of firm-specific capital, I realized something which is fundamentally true of a lot of day jobs. Nothing I did at the job mattered, in the long run.
Sure, in the short run, I was writing XML files and Java classes which, knock on wood, successfully let my employers ship an examination management system to their client (a major university). I was a really effective Turing machine which accepted emails and tickets as input and delivered (occasionally) working code and Excel files as output. But no matter how much I spun, nothing about my situation ever changed. I worked my week, got to the end of it, and had nothing to show. The next week there would be more emails and more tickets, exactly like the week before. The week after that would be more of the same. And absolutely nothing about my life would change. I'd end the week with nothing.
Don't end the week with nothing. Prefer to work on things you can show. Prefer to work where people can see you. Prefer to work on things you can own.
Because when your work is in public, you can show it to people. That's often the best way to demonstrate that you're capable of doing work like it.
Work you can show off, though, is prima facie evidence of your skills. After your portfolio includes it, your ability to sell your skills gets markedly better.
Thus my first piece of advice: if you have the choice between multiple jobs, all else being equal, pick the one where you are able to show what you've worked on. This could mean working on a language stack where work byproducts are customarily OSSed (e.g. Rails) versus one which isn't (e.g. C#). This could mean working on particular projects within the organization which like external visibility (e.g. Android) rather than projects which don't (e.g. AdWords plumbing -- presumably Google will pay you a lot of money to do that, but consider it compensation for not being able to talk about it). This could mean working in industries which default to being open rather than those which default to being closed.
Even at very open companies there exists lots of secret sauce, but most of the valuable work of the company is not particularly sensitive, and much of it has widely generalizable lessons. Write about those lessons as you learn them. If at all possible, publish what you write. Even if it is published to an audience of no one, you will be able to point people back to it later.
If you cannot build things you can show at work, you should build things you can show outside of work. Companies in our industry are gradually becoming more reasonable about IP assignment clauses -- there's less of the "we own everything you think of at any point in your employment" nonsense these days. Even at my very straight-laced Japanese megacorp, they were willing to write an exception into the employment contract for a) OSS work that I did outside of company hours and b) Bingo Card Creator. I offered them this in exchange: "If you let me continue working on these, I'm going to learn lots of skills which I can put to the use of the company. Normally you invest lots of money sending engineers to conferences and professional training. This is even better for you: I'll learn more with no operating expenditure and no decrease in billing efficiency." That's an offer you can make to substantially any employer.
Vanishingly few people in our industry have the profile of rock stars. They can still have substantial profile among the audience of "people professionally relevant to them." That might be as tightly scoped as "people with hiring authority for front-end developers in my metro area", which might be a set of, what, a couple of dozen folks?
You don't have to optimize for "sexy" projects. You know, sexy projects: I don't know how to describe them but I know it when I see it. Most engineering work isn't intrinsically sexy. I would, however, optimize for impact and visibility.
Consumption Is Sometimes Valuable, But Creation Moves You Forward. I'll close with my usual advice to peers: reading this email was valuable (knock on wood). Watching Jason's video is valuable. Rolling up your sleeves and actually shipping something is much, much more valuable. If you take no other advice from me ever, ship something. You'll learn more shipping a failure than you'll learn from reading about a thousand successes. And you stand an excellent chance of shipping a success -- people greatly overestimate how difficult this is. Just don't end the week with nothing.
Programmable attention centers around making high-level decisions that orchestrate how we later spend our attention.
Ideal systems should have a low skill floor (and ideally a high skill ceiling) to use. A good example here is how implementations of spaced repetition have improved over time.
Good systems should also be able to increase impact (whether that is productivity, learning, or something else) by accounting for context. The main way this happens is by surfacing an item into attention in a timely manner.
Granularity (perhaps not the best term here) is about how higher-level decisions are able to orchestrate many smaller tasks. An example of low granularity is email snoozing: your decision to snooze an email affects a single email once. Conversely, spaced repetition software has high granularity: your decision to use the software and trust the algorithm allows you to learn a large number of items. The term “programmable attention” is evocative because it captures the idea that automation is essential for high-granularity systems.
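To make the granularity contrast concrete, here is a toy sketch of a spaced-repetition loop. It is a bare-bones Leitner-style rule, not any particular product’s algorithm, and the card names are made up; the point is that the one decision to trust the `review` rule programs when each of many items next claims your attention.

```python
# A toy Leitner-style scheduler (not a production algorithm): one decision,
# "trust the review rule," programs when each of many items resurfaces.
from dataclasses import dataclass

@dataclass
class Card:
    prompt: str
    interval_days: int = 1  # current spacing for this card
    due_in: int = 0         # days until the card claims attention again

def review(card: Card, remembered: bool) -> None:
    """Double the spacing on success, reset it on failure."""
    card.interval_days = card.interval_days * 2 if remembered else 1
    card.due_in = card.interval_days

deck = [Card("base rate"), Card("tacit knowledge"), Card("slack")]
for card in (c for c in deck if c.due_in == 0):  # the system picks what surfaces
    review(card, remembered=True)
```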
Lastly, we want users to trust the system with their attention. Many systems nowadays try to hijack our attention. Programmable attention wants us to explicitly control it. We want to build tools and systems that we can trust with our attention and to resurface things that we don’t want to slip through the cracks.
But despite the implications of the title Genetic Studies of Genius, the Termites’ high IQs did not predict high levels of creative achievement later in life. Only a few made significant creative contributions to society; none appear to have demonstrated extremely high creativity levels of the sort recognized by major awards, such as the Nobel Prize.
As the study evolved over the years, the term gifted was substituted for genius. Although many people continue to equate intelligence with genius, a crucial conclusion from Terman’s study is that having a high IQ is not equivalent to being highly creative. Subsequent studies by other researchers have reinforced Terman’s conclusions, leading to what’s known as the threshold theory, which holds that above a certain level, intelligence doesn’t have much effect on creativity: most creative people are pretty smart, but they don’t have to be that smart, at least as measured by conventional intelligence tests.
These results confirmed Terman’s findings that creative genius is not the same as high IQ. If having a very high IQ was not what made these writers creative, then what was?
Today’s neuroimaging tools show brain structure with a precision approximating that of the examination of post-mortem tissue; this allows researchers to study all sorts of connections between brain measurements and personal characteristics. For example, we know that London taxi drivers, who must memorize maps of the city to earn a hackney’s license, have an enlarged hippocampus—a key memory region—as demonstrated in a magnetic-resonance-imaging, or MRI, study.
Based on all this, I surmised that observing which parts of the brain are most active during free association would give us clues about the neural basis of creativity. And what did we find? Sure enough, the association cortices were wildly active during REST.
I realized that I obviously couldn’t capture the entire creative process—instead, I could home in on the parts of the brain that make creativity possible. Once I arrived at this idea, the design for the imaging studies was obvious: I needed to compare the brains of highly creative people with those of control subjects as they engaged in tasks that activated their association cortices.
And we examined a condition that we called random episodic silent thought, or REST: we asked subjects to lie quietly with their eyes closed, to relax, and to think about whatever came to mind. In essence, they would be engaged in “free association,” letting their minds wander. The acronym REST was intentionally ironic; we suspected that the association regions of the brain would actually be wildly active during this state.
As I hypothesized, the creative people have shown stronger activations in their association cortices during all four tasks than the controls have. (See the images on page 74.) This pattern has held true for both the artists and the scientists, suggesting that similar brain processes may underlie a broad spectrum of creative expression. Common stereotypes about “right brained” versus “left brained” people notwithstanding, this parallel makes sense. Many creative people are polymaths, people with broad interests in many fields—a common trait among my study subjects.
So far, this study—which has examined 13 creative geniuses and 13 controls—has borne out a link between mental illness and creativity similar to the one I found in my Writers’ Workshop study. The creative subjects and their relatives have a higher rate of mental illness than the controls and their relatives do (though not as high a rate as I found in the first study), with the frequency being fairly even across the artists and the scientists. The most-common diagnoses include bipolar disorder, depression, anxiety or panic disorder, and alcoholism. I’ve also found some evidence supporting my early hypothesis that exceptionally creative people are more likely than control subjects to have one or more first-degree relatives with schizophrenia.
As in the first study, I’ve also found that creativity tends to run in families, and to take diverse forms. In this arena, nurture clearly plays a strong role. Half the subjects come from very high-achieving backgrounds, with at least one parent who has a doctoral degree. The majority grew up in an environment where learning and education were highly valued.
Many creative people are autodidacts. They like to teach themselves, rather than be spoon-fed information or knowledge in standard educational settings. Famously, three Silicon Valley creative geniuses have been college dropouts: Bill Gates, Steve Jobs, and Mark Zuckerberg. Steve Jobs—for many, the archetype of the creative person—popularized the motto “Think different.” Because their thinking is different, my subjects often express the idea that standard ways of learning and teaching are not always helpful and may even be distracting, and that they prefer to learn on their own.
I can attest to the fact that most software companies build software around specifications and mockups — a written description of a given feature, and a collection of images that approximates the final interface. The ‘conventional’ way software engineers work is to take these specifications (usually written by the product or program manager) and the mockups, and then implement them with feedback from a designer. But Apple’s approach was different: they focused on interactive, working demos, and made that the minimum unit of discussion.
Kocienda calls this process of prototype -> demo -> improvement -> repeat a process of ‘creative selection’ … hence the title of the book.
My takeaway from this section of Kocienda’s book is that demos should be the basic unit of discussion if you’re creating user-facing software from scratch.
My friends at Google tell me that whiteboarding and spec-writing are a core part of their process, which raises the question: could it be that the domains at which Google excels — e.g. complex distributed systems operating at massive scale — benefit primarily from a spec-driven approach, whereas ‘creative selection’ works better for user-facing products?
We arrived at our final decisions only with judgment and time. Heuristics are like this. They’re subjective.
A natural implication of this, though, is that the process of creative selection cannot happen within the context of a large team. Kocienda argues that Apple’s process calls for a small-team approach:
• Inspiration, which means thinking big ideas and imagining what might be possible, as when Imran saw how smooth finger tracking would be the key to people connecting to iPhone experiences through touch.
• Collaboration, which means working together well with other people and seeking to combine your complementary strengths, as when Darin and Trey helped me make the insertion point move correctly in WebKit word processing.
• Craft, which means applying skill to achieve high-quality results and always striving to do better, as when the Safari team made the web browser faster and faster by running the Page Load Test, trying to understand what this test program told us about our software, and using these findings to optimize our code.
• Diligence, which means doing the necessary grunt work and never resorting to shortcuts or half measures, as when we persisted through the tedium of fixing cross-references to get Safari to build in the lead-up to the Black Slab Encounter.
• Decisiveness, which means making tough choices and refusing to delay or procrastinate, as when Steve Jobs made me pick the better keyboard layout for the iPad on the spot while he waited, rather than just offering the two different designs Bas and I developed.
• Taste, which means developing a refined sense of judgment and finding the balance that produces a pleasing and integrated whole, as when we made the choice to offer a QWERTY keyboard layout for the iPhone.
• Empathy, which means trying to see the world from other people’s perspectives and creating work that fits into their lives and adapts to their needs, as when Scott Herz made a game to find the best size for touch targets, so it was comfortable to tap the iPhone display and accommodated people with varying levels of dexterity.
Ultimately, I will be arguing that to foster optimal human innovation, digital creative tools need to be interoperable, moldable, efficient, and community-driven.
For the purposes of this essay, “digital creative tool” will refer to tools that foster creative thought in their human users, whether that be a design, writing, or multimedia tool.
Many original creative tools were inspired by the work of thinkers such as Douglas Engelbart, who published a vision of tools empowering humans in his pioneering 1962 paper Augmenting Human Intellect: A Conceptual Framework. As could be surmised, his goal was to use digital tools to augment human intelligence and thus boost our collective intellect.
Engelbart’s work, along with that of many other designers and programmers such as Ivan Sutherland (Sketchpad³), J.C.R. Licklider (Intergalactic Computer Network⁴) and Alan Kay (Dynabook⁵ and OOP⁶), clarified a new approach in which computers weren’t merely executors, but “joyful” machines that could expand human thought itself (Engelbart). As Engelbart put it, “tools… will serve as new media of expression and inspiration to creativity.”
Bill Atkinson, its developer, described HyperCard as “an erector set for building applications.” Simply put, you could build your own software using HyperCard, with each program made up of ‘stacks’ of ‘cards’. Each card could contain text and images, as well as interactive elements like buttons, with the ability to interconnect between other cards. HyperCard was a tool for making tools, and was unique in its focus on encouraging open collaboration between users.
The origins of digital creative tools show that the most boundary-pushing and high-potential tools were often interoperable, moldable, community-driven, abstracted, and efficient, thus actualizing creativity within the tool itself. Upon review, it is clear that the fundamental human-computer interaction principles of the past have remained unchanged, such as the direct manipulation of graphical objects, the mouse, and windows. However, it is also evident that our expectations for a computer’s capacity to understand and serve us have expanded considerably.
Despite their brief moment in the internet limelight, creative computing communities have failed to maintain power over the software they operate. A close examination of the digital creative tools landscape today reveals that most innovation in this space has shifted away from the computer users (as seen through Actionscript and Flash) to the heavily funded R&D departments of the dominant Silicon Valley technology companies.
This business-driven innovation fails to invest in the creativity of human beings themselves, instead seeking to streamline our capability to act as an execution machine. Directionally, this thinking is clearly in contradiction with the ideas of early digital creative tool pioneers such as Engelbart and Kay. Furthermore, this reality proves the need for us to reconsider our priorities for progress if we truly wish to foster greater human innovation.
When considering the Obscure Features Hypothesis for Innovation,¹² which states that every innovative solution is built upon at least one new or commonly overlooked feature of a problem, you can see how AI may never advance enough to take the jobs of Chief Innovation Officers. While there are situations in which computers are able to figure out the steps needed to get from state A to state B, it is still within the limitations of pre-human-mandated decomposition. This means that the actions we take with computers are steps that we have already broken down in our head in order to achieve an end result.
As Shan Carter and Michael Nielsen explain succinctly in Using Artificial Intelligence to Augment Human Intelligence, “Intelligence Augmentation (IA), is all about empowering humans with tools that make them more capable and more intelligent, while Artificial Intelligence (AI) has been about removing humans fully from the loop.”
The lack of interoperability¹³ between creative tools means that all work created within a tool is confined to the limitations of that tool itself, posing a hindrance to collaboration and limiting creative possibility. If tools are meant to amplify the power of our brains and take over the mechanical aspects of human thought, limiting creation to a single piece of software’s capabilities is clearly antithetical to creativity.
Interoperability has been key to building sufficiently upon each other's innovations and contributions, and has spurred the field's speed of innovation.
However, it is also worth examining the issues associated with interoperability. Interoperability can often slow down improvements and lead to inconsistent adoption of open standards. A poignant example of this can be found in the lack of universal browser compatibility for HTML/CSS features, which adds unnecessary complexity to web development work.
Standardization would fundamentally change the tide of digital creative tools for the better by allowing in more collaborators, making space for greater tooling innovation, and expanding a project’s creative constraints beyond any one tool itself.
Returning to Engelbart’s guiding principle, computers have the power to change and expand human thought. But to do so, software must adapt to suit the user's unique thought process. This idea shines a light on the importance of a tool’s moldability, as measured by how easily the software can be customized to the average non-programmer’s needs.
As Bill Gaver wrote in Designing for Homo Ludens: “The designer’s role in this is not like that of a doctor, prescribing cures for people’s ills; nor is the designer a kind of servant, developing technologies that people know they want. Instead, designers should be provocateurs, seeking out new possibilities for play and crafting technologies that entice people to explore them.”
In considering how computers might become better co-creators and actualizers, we can begin to identify more granular ways that these machines might become more classically “efficient” in facilitating human creativity. However, it is important that we first acknowledge the tension between efficiency and creativity. While on the surface the two dimensions may seem at odds with one another, this does not have to be the case. The computer’s execution skill set lends itself to minimizing the need for any sort of repetitive or monotonous work that might hinder the creative process. This truth refocuses our attention on fostering creative thought (such as ideation) as opposed to linear thought (such as creating functional architectures), seeing as the computer is already highly skilled at accommodating the latter.
Moving further along in the creative process to assembly, it becomes clear that the computer’s execution skill set uniquely lends itself to generating a multitude of options for the human user to choose from. The process of assembly offers ample opportunity to tap into the power of human + computer co-creation by enabling the user to assign conditional logic¹⁴ and dictate numerous variations at once.
Practically, incorporating variation generation and conditional logic in interface design is the difference between designing numerous different states of the same UI button versus designing one state and then defining the logic buttons should follow. Logic shifts much of the needless production work back on the computer to follow linearly, as opposed to the human user working as the linear machine.
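A short sketch of that difference, with hypothetical state names and style values: the styling logic is defined once, and enumerating every variant becomes the computer's linear work.

```python
# Hypothetical example: one styling rule replaces hand-drawing every state,
# and the computer does the linear work of enumerating all the variants.
from itertools import product

def button_style(hovered: bool, pressed: bool, disabled: bool) -> dict:
    if disabled:
        return {"background": "#cccccc", "opacity": 0.5}
    if pressed:
        return {"background": "#004499", "opacity": 1.0}
    if hovered:
        return {"background": "#0066dd", "opacity": 1.0}
    return {"background": "#0055cc", "opacity": 1.0}

for state in product([False, True], repeat=3):  # all 8 combinations
    print(state, button_style(*state))
```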
But why does this matter? It matters because innovation is largely dependent on the human capacity to think creatively, and there is a strong argument to be made that technology’s primary role is to speed up the creative process and catalyze innovation at a global scale.
These two elements form the Working Backwards process — starting from the customer experience and working backwards from that by writing a press release that literally announces the product as if it were ready to launch and an FAQ anticipating the tough questions.
the Lean Startup model has pretty much become the orthodoxy for launching a new product or company today. And the idea itself is simple to articulate: build an extremely rough version one, launch it, find some users, and then iterate on the product as quickly as you can. You are then expected to use the information generated by your iterations to either stay the course and keep tuning the product, or — in rarer circumstances — to change your approach altogether; Ries calls this ‘pivoting’, because you’ll have to rethink your ‘strategy’
Amazon’s Working Backwards method is simple to describe, but difficult to do. You start out by writing a press release, which follows a very particular structure. You move on to write an attached FAQ document, which addresses a bunch of internal and external issues. These issues include (but aren’t limited to) things like total addressable market, per-unit economics, bill of materials, P&L, key dependencies, and technical feasibility. The total document — both PR and FAQ — should not exceed six pages in length.
Spending time up front to think through all the details of a product, and to determine—without committing precious software development resources—which products not to build, preserves your company’s resources to build products that will yield the highest impact for customers and your business.
I think one way you can think about this is that for a product to succeed, lots and lots of things have to go right. And if even one critical thing goes wrong — be it software design, or some hardware flaw, or market size, or manufacturing costs — the odds are good that the entire product flops. And so when you see enough examples of new product development, you’d quickly realise that all of the product development processes that I’ve described — Lean Startup, Working Backwards, Creative Selection, and Braintrust — are simply ways of iterating cheaply through an idea space, with sufficient feedback, in the hopes of checking enough boxes for success.
All this suggests a peculiar conclusion: as a medium, books are surprisingly bad at conveying knowledge, and readers mostly don’t realize it.
Writing is perhaps the greatest of human inventions, binding together people who never knew each other, citizens of distant epochs. Books break the shackles of time. A book is proof that humans are capable of working magic.
If pressed, many lecturers would offer a more plausible cognitive model: understanding actually comes after the lecture, when attendees solve problem sets, write essays, etc. The lecture provides the raw information for those later activities. Great: that’s a real model, and parts of it are supported by cognitive science. But if we’d begun with this model, would we have chosen live, ninety-minute speeches to convey raw information for a problem set?
The lectures-as-warmup model is a post-hoc rationalization, but it does gesture at a deep theory about cognition: to understand something, you must actively engage with it.
Like lectures, books have no carefully-considered cognitive model at their foundation, but the medium does have an implicit model. And like lectures, that model is transmissionism. Sequences of words in sequences of lines in sequences of pages, the form of a book suggests people absorb knowledge by reading sentences.
I acknowledged earlier that of course, some people do absorb knowledge from books. Indeed, those are the people who really do think about what they’re reading. The process is often invisible. These readers’ inner monologues sound like: “This idea reminds me of…,” “This point conflicts with…,” “I don’t really understand how…,” etc. If they take some notes, they’re not simply transcribing the author’s words: they’re summarizing, synthesizing, analyzing.
Books are static. Prose can frame or stimulate readers’ thoughts, but prose can’t behave or respond to those thoughts as they unfold in each reader’s head.
Unlike most non-fiction books, textbooks usually are built around explicit cognitive models. For instance, they often alternate between explanations which introduce concepts, and exercises which push students to think about those concepts in specific ways. It’s great that textbooks didn’t choose their cognitive models by accident. That’s an important first step. But it’s not enough: people still struggle to reliably absorb knowledge from textbooks.
In this section we’ve seen that, like lectures, non-fiction books don’t work because they lack a functioning cognitive model. Instead, like lectures, they’re (accidentally, invisibly) built on a faulty idea about how people learn: transmissionism. When books do work, it’s generally for readers who deploy skillful metacognition to engage effectively with the book’s ideas. This kind of metacognition is unavailable to many readers and taxing for the rest. Books aren’t pulling their weight. Textbooks do more to help, but they still foist most of the metacognition onto the reader, and they ignore many important ideas about how people learn.
Instead, I propose: we don’t necessarily have to make books work. We can make new forms instead. This doesn’t have to mean abandoning narrative prose; it doesn’t even necessarily mean abandoning paper—rather, we can free our thinking by abandoning our preconceptions of what a book is.
Perhaps most remarkably, the powerful ideas are often invisible: it’s not like we generally think about cognition when we sprinkle a blog post with links. But the people who created the Web were thinking about cognition.
How might we design a medium so that its “grain” bends in line with how people think and learn? So that by simply engaging with an author’s work in the medium—engaging in the obvious fashion; engaging in this medium’s equivalent of books’ “read all the words on the first page, then repeat with the next, and so on”—one would automatically do what’s necessary to understand? So that, in some deep way, the default actions and patterns of thought when engaging with this medium are the same thing as “what’s necessary to understand”?
The base rate is simply the probability of some event occurring when you have no other information. In this case, the base rate of getting accepted as a regular applicant was 8%, while the base rate for getting accepted as an early applicant was 16%. Without any other information, you should assume that you will experience the base rate.
Select a reference class. In Maggiulli’s opening story the reference class is really simple: this is Stanford’s acceptance rate for single-choice early action vs the overall acceptance rate.
Perform some adjustments. This doesn’t really apply to the Stanford example, but sometimes you want to adjust upwards or downwards based on extra information that you have.
Make a decision based on the base rate of that reference class. This is clear: in Maggiulli’s case, you apply for SCEA, with the understanding that you still might not get in.
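As a minimal sketch of the three steps above, using the acceptance rates quoted in the passage (the `adjustment` parameter is a hypothetical stand-in for step two):

```python
def adjusted_rate(base_rate: float, adjustment: float = 1.0) -> float:
    """Step 2: scale a reference-class base rate by any extra information you have."""
    return min(base_rate * adjustment, 1.0)

# Step 1: the reference classes and their base rates, from the passage.
rates = {
    "regular": adjusted_rate(0.08),
    "early (SCEA)": adjusted_rate(0.16),
}

# Step 3: decide using the best base rate, remembering that even the
# better option still means rejection most of the time.
choice = max(rates, key=rates.get)
print(f"Apply {choice}: estimated acceptance {rates[choice]:.0%}")
```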
Virtually any amount of money available to you personally is mouse droppings to your prospective employer. They will not feel offended if you ask for it.
Only negotiate salary after you have agreement in principle from someone with hiring authority that, if a mutually acceptable compensation can be agreed upon, you will be hired.
The second implication is that the inner serf worrying “If I even attempt to negotiate this, the deal will fall through” is worrying for nothing. They’ve got thousands invested in this discussion by this point. They want you. The absolute worst outcome of negotiating an offer in good faith is that you will get exactly the contents of that offer. Let me say that again for emphasis: negotiating never makes (worthwhile) offers worse.
Every handbook on negotiation and every blog post will tell you not to give a number first. This advice is almost always right. It is so right, you have to construct crazy hypotheticals to find edge cases where it would not be right.
Grace is a stronger version of nice, a more private version of generosity, a more intrinsic version of charity. It’s in some ways the opposite of justice – which is about dealing to each what they deserve. Grace is when you believe in each person’s inherent worth in all circumstances, and act according to that.
The same problems will crop up. In structured editors like Notion, the tendency to overstructure is common. Usually you’ll see a lot of structure - a table of nested pages with types, very particular formatting, a well-chosen icon. And then, after the structure is put in place, the content arrives and doesn’t fit it. The columns aren’t filled in, or are filled with heterogeneous information. The structure is encoded, but doesn’t reflect reality or how people actually think about the information.
The problem is that – much like building software – each thing added dilutes everything else. Too many small moments muddy the water of your major accomplishments.
If your goal is to get a promotion or grow within your organization's career ladder, you should explicitly map your contributions to the level you're operating at and the level where you want to be.
Bring your manager along for the ride by proactively sharing your work and contributions throughout the half. No surprises at review time!
I recommend keeping a list of ongoing accomplishments and achievements throughout each half. Write things down as they happen.
Another strategy here that I have been learning from Marshall Bock on Design Details (episode 363 to be specific) is to maintain a "work journal." This is an ongoing document that includes notes from all of your day-to-day work. I recommend listening to the episode to learn the basics, but one of the best ideas in this practice is to use a short list of verbs to describe your day-to-day work. For example, participated in or contributed to or shipped.
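For a concrete flavor, here is a hypothetical shape such a journal might take; the verbs come from the passage, but the dates and entries are invented for illustration:

```python
# A hypothetical work-journal structure: one dated entry per item,
# each led by a verb from a short, fixed vocabulary.
VERBS = ("shipped", "contributed to", "participated in")

journal = [
    ("2024-03-04", "shipped", "the settings page redesign"),
    ("2024-03-05", "contributed to", "the incident review for the checkout outage"),
    ("2024-03-06", "participated in", "the design-system API discussion"),
]

# At review time, group entries by verb to see the shape of your half.
for verb in VERBS:
    items = [what for (_, v, what) in journal if v == verb]
    print(f"{verb}: {len(items)} item(s)")
```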
Startups are hard and if you just naively look at who failed, you’ll see lots of people with good ideas and bad ideas. But it doesn’t follow from this that idea quality doesn’t matter.
My design mentor told me a story about a professor he had in school who used to take his glasses off when he assessed their designs. He said it helped him get a better sense of the layout and visual harmony.
blockframing with the client or stakeholder dramatically increases your success rate when showing off the final design(s) because the client already knows what’s coming.
Once we finish a design, we create a blockframe directly on top of it so that we can slowly reveal different sections of the design. In our experience, showing the whole design all at once is SUPER OVERWHELMING for the client, and they’ll start asking all kinds of questions that you’re not ready for.
“Until now, we have always had to adapt to the limits of technology and conform the way we work with computers to a set of arbitrary conventions and procedures. With NUI, computing devices will adapt to our needs and preferences for the first time and humans will begin to use technology in whatever way is most comfortable and natural for us.” —Bill Gates, co-founder of the multinational technology company Microsoft
Defining NUIs is difficult, but often when we think about user interfaces that are natural and easy to use, we think of user interfaces where the interaction is direct and consistent with our ‘natural’ behaviour. Examples of user interfaces that people often refer to as natural are multi-touch on the Apple iPad or using mid-air body gestures to control Microsoft’s Kinect console.
Too often, people think that if they just use—for example—gesture interaction, the user interface will be natural.
Bill Buxton says that NUIs “exploit skills that we have acquired through a lifetime of living in the world, which minimizes the cognitive load and therefore minimizes the distraction”. He also states that NUIs should always be designed with the use context in mind.
Joshua Blake lists four guidelines for designing NUIs: Instant expertise; Progressive learning; Direct interaction; Cognitive load (primarily use innate abilities and simple skills).
When you design an NUI, you should take advantage of the users’ existing skills. If users can apply skills they have from other areas of their lives, you’ll save them the trouble of learning something completely new.
lay out a clear learning path for users, one that allows them to start with basic skills and move on to something more advanced step by step, in increments. Never overwhelm novice users with too many options. Instead, keep it simple—think ‘baby steps’.
An NUI should imitate the user’s interaction with the physical world by having a direct correlation between user action and NUI reaction.
the NUI should primarily show information that is relevant to the user’s current interaction. Google Maps, for instance, does this by only showing the scale of the map when the user is zooming in or out.
An NUI should take advantage of the users’ existing skills and knowledge. An NUI should have a clear learning path and allow both novice and expert users to interact in a natural way. Interaction with an NUI should be direct and fit the user’s context. Whenever possible, you should prioritize taking advantage of the user’s basic skills.
Users will ask you to build a specific feature X to fix their problem Y. The sales team may push you to build feature X to meet a client request. However, their X is usually defined by their perception of the problem and limited by their understanding of how the product can be changed to resolve it. Design’s challenge is to investigate the surface-level issues to find the root cause and then solve for that.
As a designer, the most important step is to verify the problem actually exists and is the right problem to solve.
Bad ideas are a natural step in the creative process, can help clarify your thinking, and even show you why something else is a better idea.
After some initial exploration, you should get feedback and reactions from other people starting with your teammates. Observe how they react. When they say something about the design, don’t just pay attention to what they say but ask them why they said it. You should get feedback while you are still exploring, so don’t worry about the details and polish. If people give you negative feedback, don’t take that as a sign the direction is necessarily bad but focus on learning why. It could be that you’re going in the right direction but the current version isn’t quite right or doesn’t fit into their understanding of the problem. Find the gaps in your design or the story and then fill them.
That’s the process of learning anything on the internet. There are several metaskills at work here: knowing what to search for, decomposing your need into different steps, and knowing which guide to follow for each step.
The tricky thing is that most of these skills are catch-22s. How do you know what to search for when you don’t know what you’re supposed to learn? How do you decompose a problem into steps when you don’t understand the problem? How do you know what the best guide is for each step when you don’t know how that step works?
there’s already a ton of great guides out there for most things you’d want to learn. The real problem is figuring out which ones to follow. If we want to increase the amount of knowledge in the world, the marginal unit of expert time is better spent organizing than producing.
Key takeaways: Users will transfer expectations they have built around one familiar product to another that appears similar. By leveraging existing mental models, we can create superior user experiences in which the user can focus on their task rather than learning new models. Minimize discordance by empowering users to continue using a familiar version for a limited time.
This seems like an obvious thing to say. But as with most such things, the second-order implications are more interesting than the first-order ones. For instance, because expertise is necessary to play the metagame, it is often useful to search for the meta in your domain as a north star for expertise.
I don’t know if it is identifying the meta that helps, or if there is some other deep skill transference going on. But I’ve cautiously reached out to friends who played serious competitive sports when they were younger, and many of them have had similar experiences. There seems to be something in getting good at a skill tree that helps in later life. I’d like to think it is a function of exposure: once you see the competitive meta at the top of one skill tree, you begin looking for it everywhere else.