Two important prerequisites for self-organization: Self-organization happens through composition. Composition is combining things with other things to create new things.
You can think of the units and the mechanisms of composition together as an alphabet. Alphabets tend to follow Gall’s Law: simple alphabets produce complex behaviors, complex alphabets produce stupid behaviors. Consider: DNA has only four base pairs, but it was expressive enough to generate you!
When we write, we flatten the cloud of associated ideas in our head into a linearized subset (lossy). The reader then unflattens this linearized subset into their own cloud of associated ideas (lossy). Each lossy step is an opportunity for mutations in understanding to emerge. Useful mutations are remembered (selection) and shared (heredity). And so... mutation, heredity, selection. Ideas evolve!
If we carefully closed the right feedback loops, could we construct a creative flywheel that generates finished works almost by accident, through a stream-of-consciousness process?
Do you ever get the feeling you’re throwing your notes into the void? You jot an idea down. Before long, it gets lost in the shuffle. You forget it was ever there. Does a note for this already exist? Maybe you’ve taken this note before, and countless times before. You search for your notes on a topic. Most of them are scratch notes—rough, incomplete, touched once, then forgotten. This is what broken feedback loops feel like. Everything is feed-forward. There are no opportunities in the interface to stumble back over your notes, to recurse back over them. You’re continually adding energy to the system, but none of it comes back.
Knowledge gardening, on the other hand, is the emergent result of constructing a feedback system. I recurse over scratch notes, revise them, add to them, refactor them, and combine them with other ideas to form new ideas. Over time, these little recursive acts of watering and weeding accumulate. Useful knowledge grows from the ground up.
The core game mechanic of Zettelkasten is to file your note some place where you would want to stumble over it again. As you rifle through old notes to find this place, you recurse over notes you had forgotten about, sparking new ideas, which you then write down, and have to file, causing you to rifle through again, sparking more ideas… in a cascade of idea generation. Zettelkasten is a feedback system.
The only way to create a new note in Notational Velocity was through the search bar. Typing live-searched through your notes. Hitting enter would create a new note, with the contents of the search query as the title for the note. Like Zettelkasten’s game mechanic, this search-or-create mechanic closes a feedback loop. Every time you enter a new idea, you’re recursing over old ideas. Often you find yourself editing or refactoring an old note instead of creating a new one. Over time, these microinteractions add up, generating knowledge from the bottom up.
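A minimal sketch of that search-or-create loop, with invented names and data shapes (an illustration of the mechanic, not Notational Velocity's actual code):

```ts
// Sketch of the search-or-create mechanic: typing filters existing notes;
// pressing Enter either opens the best match or creates a new note titled
// with the query. All names and shapes here are illustrative assumptions.
interface Note {
  title: string;
  body: string;
}

class NoteStore {
  private notes: Note[] = [];

  // Live search: every keystroke narrows the list of existing notes.
  search(query: string): Note[] {
    const q = query.toLowerCase();
    return this.notes.filter(
      (n) => n.title.toLowerCase().includes(q) || n.body.toLowerCase().includes(q)
    );
  }

  // Enter: reuse an existing note if the query matches one, otherwise create
  // a note whose title is the query itself, so entering a new idea always
  // starts by recursing over old ones.
  openOrCreate(query: string): Note {
    const existing = this.search(query)[0];
    if (existing) return existing;
    const note: Note = { title: query, body: "" };
    this.notes.push(note);
    return note;
  }
}

// Usage: the second call stumbles back over the note created by the first.
const store = new NoteStore();
store.openOrCreate("feedback loops");        // creates the note
console.log(store.openOrCreate("feedback")); // finds and reuses it instead of duplicating
```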
As we did more research with universities and companies, we came to realize that relationships are the leading driver of meaning and fulfillment at work. If you lack relationships, it’s almost impossible to be fulfilled at work or life in general.
You’d think people in “purpose industries” (like education or healthcare) would always be fulfilled, and people in “functional industries” (like accounting and engineering) wouldn’t be. But we found that industry wasn’t a major predictor of fulfillment. Instead, what matters is the way we’re wired as human beings. The number one predictor of our survival thousands of years ago was whether we were in a tribe. So, we were wired to have strong connections with other people because that determined whether we would get lunch or be lunch. Therefore, you need a sense of belonging.
“Look for jobs where you get to work with people you want to work with. Don’t focus as much on the industry. Don’t focus as much on the function. Focus foremost on: Is this a group of people I want to spend 8 hours a day with? People who will energize me? People who will challenge me?” That should be your number one determinant (assuming the salary is fair).
Then, once you enter an organization, take a relationship-first approach. From day one, proactively identify who you want to build relationships with. If you initiate, you’re going to do well. If you sit around waiting for others to come to you, you may be waiting a long time.
The next step is to take stock. Every week or month, sit down and ask, “Do I have meaningful relationships at work? Where do I have gaps? Where would I like to build a new relationship?” Treat this as a continuous experiment where you’re measuring, building, and maintaining the relationships you want.
In lieu of Xanadoc links, the modern web has landed on some fairly decent solutions to this issue: hover previews and unfurls.
Ted's proposal for visible links relied on another key Xanadu pattern: parallel documents arranged side-by-side. This pattern allows us to see quotations in their original context, and to quickly jump to the source material of documents.
This metaphor of *the-web-as-paper* works quite well. The otherwise immaterial and abstract nature of the web becomes understandable, familiar, and (most importantly) *usable* through the paper metaphor. We have all seen, handled, and created our own paper documents – a set of understandings we bring to our behaviour on the web.
When we designed digital documents to mimic paper, we unnecessarily brought along many of its physical limitations:
• We treat the page as the smallest unit of linkable information, instead of the sentence or paragraph.
• We remain married to the conventions of traditional typesetting established by paper printers.
• We insist on a single view – a mockup of a physical page – rather than embracing the flexible, multiple views possible in a dynamic computational medium.
On cynicism: “Cynicism is not a neutral position – and although it asks almost nothing of us, it is highly infectious and unbelievably destructive. In my view, it is the most common and easy of evils.”
Optimism is “a way of explaining failure, not prophesying success”: If we’ve failed at something, it’s because we didn’t have the right knowledge in time.
Optimism is a stance towards the future: Nearly all failures, and nearly all successes, are yet to come.
Optimism follows from the explicability of the physical world: If something is permitted by the laws of physics, then the only thing that can prevent it from being possible is not knowing how.
In the long run, there are no insuperable evils: There can be no such thing as a disease for which there can’t be a cure, because bodies are physical things that follow the laws of physics.
It’s not blind optimism, but it’s not pessimism. It’s the very optimistic belief that things will inevitably go wrong, but that each new challenge is an opportunity for further progress.
Criticism is crucial, but pessimism – “a tendency to see the worst aspect of things or believe that the worst will happen; a lack of hope or confidence in the future” – is actively harmful on both individual and societal levels. If pessimism causes people to stop having kids out of fear, that’s bad. If it makes people give up on trying to improve their environments, that’s bad. If it discourages entrepreneurs from starting companies, or encourages them to play their ambitions safe when they do, that’s bad.
Housel gives five reasons pessimism gets so much attention:
1. Optimism appears oblivious to risks, so by default pessimism looks more intelligent. (But, he says, that’s the wrong way to view optimists.)
2. Pessimism shows that not everything is moving in the right direction, which helps you rationalize the personal shortcomings we all have. (Misery loves company.)
3. Pessimism requires action, whereas optimism means staying the course. (Sell sell sell! is more active than “hold.”)
4. Optimism sounds like a sales pitch, while pessimism sounds like someone trying to help you. (But optimism is often the correct result, and pessimists are selling, too.)
5. Pessimists extrapolate present trends without accounting for how reliably markets adapt. (This misses humans’ ability to adapt.)
A dearth of optimism will slowly kill any product, team, or mission. Execution will slow to a halt, and even the most minor tasks will take weeks. Our optimism and resolve have immense influence over what we accomplish, both at a micro, task-by-task level and, summed up, over a lifetime.
One of Max’s (previous VP of Design) many memes was “Hard choice, easy life. Easy choice, hard life”. Decisions that feel scary in the moment almost always pay off. The things in my life that I am most proud of have started with this feeling. Go towards it, do hard things.
Over time, I learned to see drive-by feedback not as an attack on my work but as a sign that people cared about what I was working on. Then it became a motivator for me.
This is a freeing milestone to hit. It allows you to have a beginner’s mindset without the self-consciousness of being an actual beginner. Freely admitting that you don’t know the answers allows you to explore, and to bring others along with you. It also changes how you see others. I suddenly realized that no one else really knew what they were doing either. We’re all imposters, figuring it out together.
There’s a quote I love, which is “fear is excitement without breath”. Usually when I feel fear, it’s excitement in disguise. I try to lean into that feeling. I breathe and assess the worst that could happen if I run towards it. Usually that worst-case scenario is not as bad as what my body is telling me.
I'll tell you how it's really done, so you can at least tell your own kids the truth. It's all about users. The most reliable way to become a billionaire is to start a company that grows fast, and the way to grow fast is to make what users want. Newly started startups have no choice but to delight users, or they'll never even get rolling. But this never stops being the lodestar, and bigger companies take their eye off it at their peril. Stop delighting users, and eventually someone else will. Users are what the partners want to know about in YC interviews, and what I want to know about when I talk to founders that we funded ten years ago and who are billionaires now. What do users want? What new things could you build for them? Founders who've become billionaires are always eager to talk about that topic. That's how they became billionaires.
contexts lose a lot of their focusing power when either a) most of your work takes place at one context (e.g. "@computer"), or b) you start using contexts more for taxonomical labeling than to reflect functional limitations and opportunities.
This causes many of us to fashion more or less phoney-baloney "sub-contexts" that reflect some facet of the parent (e.g. "@computer" might contain "@email," "@web," "@code," "@print," and so on). While this makes terrific sense from a logical standpoint (and it can certainly have its uses), it doesn't reflect the true meaning of a context, at least in my own mind: "what tools, resources, opportunities, and limitations are unique to this situation?" or put slightly differently from the perspective of choosing tasks at a given time, "what are the things I can't work on now given where I am and the tools to which I have access?"
try to ensure that any action I identify as a next action can be finished, front to back, in less than 20 minutes—preferably in fewer than 10. So, for example, while “Write an article on GTD” is practically useless (that’s a project!), and “Draft ideas for GTD article” is a minor improvement, “Brainstorm six ideas for a 1,000-word GTD article” is right in the sweet spot for me. Knowing that 20 minutes is my maximum allowable time for an action also provides a handy baseline for planning what I can accomplish over a given day or week.
Defending a multi-button mouse, Engelbart explained to the group that immediate learnability should not always be the goal of a hardware device or a software interface. In many circumstances an effective interface may be complicated and challenging to learn. Engelbart offered the keyboard as an example of a complicated input device. It takes considerable time and effort to learn touch-typing, he said, but once mastered, you’ll enjoy a lifetime of fast text input.
This is the part that most leaders miss about brainstorming. If a group feels psychologically safe, they’ll be just as effective coming up with ideas on their own and sharing them as they go. If there’s no psychological safety, the team isn’t performing at their best, and a brainstorm is only going to exacerbate fears of negative judgment.
It’s not that I advocate for no note-taking. I just strongly believe in keeping it as elementary as possible, such that the note-taking itself doesn’t become the thrust of the endeavor. Leonardo da Vinci kept all of his notes in one big book. If he liked something he put it down. This is known as a commonplace book, and that is about as detailed as your note-taking system needs to be, unless you plan on thinking more elaborately than Leonardo da Vinci.
Musicians always learn by playing existing songs. For some reason, in visual design, we are expected to just produce tons of original content out of nowhere. It’s not a productive attitude: to learn the most, you should study work that really resonates with you and absorb it by trying to reproduce it. No shame in that!
And then just write things in as they happen. Don’t agonize over getting them all. You will not. The aim is to get the important parts.
**Incremental notes** is my push against this trend of note-taking tools that only live in the present and deny the reality of learning and living through time. We don’t remember things by modifying our past memories – we simply accumulate more, as if adding entries to a log or a journal. We search through them by traversing time, looking for links between ideas and experiences. These are the principles from which I want to build tools that augment our minds.
**Adding new ideas is better than updating old ones.** When our notes become outdated, our natural instinct is to go erase what’s now incorrect and fill that blank with the new information. But in that rewrite, we lose all of the original context we could have remembered about the history of our idea.
Regardless of the intent of the animation, when animations fail to be meaningful, a common cause is that they simply tween between their hidden and visible states, rather than visualizing the actions that triggered the change of state
Action-driven animation elevates *the connections between the views* to become the plots of the storytelling arcs. In other words, what took you between state A and state B? In our Email app, you can get between state A (composing visible) and state B (composing hidden) in at least two different ways: sending an email or discarding an email and closing the window. You’ll make your app easier to understand by visualizing whatever caused the change of state.
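A rough sketch of the idea, with hypothetical action and animation names; the point is that the transition is chosen by the action that triggered the state change, not by the before/after states alone:

```ts
// Hypothetical sketch: the same state change (composer hidden) animates
// differently depending on the action that caused it.
type ComposeAction = "send" | "discard";

function animateComposerClose(action: ComposeAction): string {
  switch (action) {
    case "send":
      // Visualize the message leaving: the card flies up and out toward the outbox.
      return "fly-up-toward-outbox";
    case "discard":
      // Visualize the draft being thrown away: the card crumples into the trash.
      return "crumple-into-trash";
  }
}

// A state-only tween would always play the same thing (say, "fade-out"),
// which tells the user nothing about *why* the composer disappeared.
console.log(animateComposerClose("send"));    // "fly-up-toward-outbox"
console.log(animateComposerClose("discard")); // "crumple-into-trash"
```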
**0.1 second** is about the limit for having the user feel that the system is **reacting instantaneously**, meaning that no special feedback is necessary except to display the result.
**1.0 second** is about the limit for the **user's flow of thought** to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
**10 seconds** is about the limit for **keeping the user's attention** focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
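These three limits translate fairly directly into feedback logic. A sketch, assuming placeholder UI hooks rather than any real framework API:

```ts
// Placeholder UI hooks (assumptions for the example, not a real API).
const showBusyCursor = () => console.log("busy cursor on");
const hideBusyCursor = () => console.log("busy cursor off");
const showProgressIndicator = (opts: { estimateMs: number }) => {
  console.log(`progress indicator, ~${opts.estimateMs} ms expected`);
  return { close: () => console.log("progress indicator closed") };
};

// Feedback keyed to the three response-time limits above.
async function runWithFeedback<T>(task: () => Promise<T>, expectedMs: number): Promise<T> {
  if (expectedMs <= 100) {
    // ~0.1 s: feels instantaneous, so just show the result.
    return task();
  }
  if (expectedMs <= 1000) {
    // Up to ~1 s: the delay is noticed but flow of thought survives; a light cue is enough.
    showBusyCursor();
    try {
      return await task();
    } finally {
      hideBusyCursor();
    }
  }
  // Longer waits: attention drifts, so show progress and an estimate of when it will finish.
  const progress = showProgressIndicator({ estimateMs: expectedMs });
  try {
    return await task();
  } finally {
    progress.close();
  }
}

// Usage: a hypothetical 12-second export gets a progress indicator.
runWithFeedback(() => new Promise<void>((resolve) => setTimeout(resolve, 12000)), 12000);
```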
“Research shows that the more we write about what we are doing and thinking, the clearer our ideas become. Repeated writing on a theme allows for the development of abstract ideas and complex relationships. Furthermore, when we return to earlier entries in our journals, we may discover we are able to answer a question, or we may suddenly understand the importance of a certain thought to the development of our work.” (RMIT Study and Learning Centre, 2012)
the general rule holds—the background is generally brighter than the objects in a scene, and so the human brain becomes much more used to dark objects against light backgrounds and thus prefers them.
In the scientific literature, black on white is called “positive polarity,” whereas white on black is called “negative polarity.” Numerous studies over decades of research have found that positive polarity displays provide improved performance in a variety of areas.
a positive polarity advantage has been found in error rates and reading speed in a letter identification task (Bauer and Cavonius 1980), the number of transcribed letters onto paper (Radl 1980), subjective ratings on visual comfort (Saito, Taptagaporn, and Salvendy 1993; Taptagaporn and Saito 1990, 1993), text comprehension (A. H. Wang, Fang, and Chen 2003), reading speed (Chan and Lee 2005) and proofreading performance (Buchner and Baumgartner 2007).
In another paper in Ergonomics, Buchner and Baumgartner showed that the benefits of positive polarity displays were independent of ambient light when they compared results of the same experiment run in a darkened room versus one with typical office lighting. (Nor did chromaticity—blue and yellow as opposed to black and white—make a difference.)
if a belief is useful to you in achieving your goals, keep it. Otherwise, discard it. An extension of this is, per Graham, if you have to pick between two theories and one is less useful than the other, pick the more useful one regardless of the truth. I believe effective, successful people do some variant of this. Unsuccessful people don’t.
> *design is different. As a designer, I don't matter. My work doesn't matter. Nothing I make matters in the context of my process. It's all about the people you are building for. You're just trying to solve problems for people. Once you realize this, it's the most liberating thing.* Why is this mindset tweak useful? Well, if you’re able to see the goal of your work as ‘solving problems’, then criticism of your work becomes less grating, because it’s no longer about you.
But implicit in optimising for usefulness is the idea that you should *continually reevaluate* your heuristics
Optimising for usefulness consists of four aspects: first, that mindset hacks exist, and that the good ones help you become more effective at life. Second, that we may invert this idea: if you possess a mindset that hinders you from achieving your goals, aggressively look for a better one. Third, ‘optimising for usefulness’ also works when it comes to learning from experience: when something happens to you, prioritise learning the lessons with higher usefulness first. Verify that they work through trial and error. And fourth, prevent yourself from leaping to obvious (or convenient!) but ultimately less useful conclusions.
What I’d like to propose is that self-awareness is the most important design skill, not because it’s a fundamental skill like many of those listed above, but because it unlocks the traits that lead to great teams and great design.
What that means is that if I think I’m self-aware, I need someone — my boss, a colleague, a friend, my husband, *someone* — to corroborate it. In the study that Dr. Eurich and her team ran, the data showed that while 95% of folks surveyed thought they were self-aware, only 10% to 15% actually were. That’s pretty shocking, especially if we’re wondering if we’d be one of those ten.
The reason we’re assessing ourselves incorrectly is that we’re relying on introspection to evaluate ourselves, and the way we introspect is faulty.
The problem with introspection is that we’re asking “why” to understand ourselves and the world around us, and in doing so it’s incredibly easy to invent answers that feel true but are often very wrong. Things are hidden from our view, or often from our consciousness. We can’t self-evaluate accurately without adding our own bias, our own worldview, and our own “truth.” And these things aren’t facts. **People who are self-aware ask what instead of why**
Let’s put this in a really practical example. Let’s say I got in a fight with my husband — and this is sheerly hypothetical, because obviously we never fight — and I’m reflecting on what went wrong. I can choose one of two routes. One route is I can ask why. *Why* did I respond in that way? *Why* is he being like this? *Why* do we keep getting in the same argument? The other route involves asking what. *What* happened that caused the argument? *What* did I do to escalate it? *What* could either of us do differently next time?
Except that it turned out most of these interventions didn’t work by decreasing cognitive biases or increasing detection of subtle signals; they worked by tamping down on noise.
> When people consider errors in judgment and decision making, they most likely think of social biases like the stereotyping of minorities or of cognitive biases such as overconfidence and unfounded optimism. The useless variability that we call noise is a different type of error. To appreciate the distinction, think of your bathroom scale. We would say that the scale is biased if its readings are generally either too high or too low. If your weight appears to depend on where you happen to place your feet, the scale is noisy. A scale that consistently underestimates true weight by exactly four pounds is seriously biased but free of noise. A scale that gives two different readings when you step on it twice is noisy. Many errors of measurement arise from a combination of bias and noise. Most inexpensive bathroom scales are somewhat biased and quite noisy.
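The scale analogy is easy to simulate. In the sketch below (all numbers invented), the biased scale is wrong by the same amount every time, while the noisy scale disagrees with itself across readings:

```ts
// Simulating the bathroom-scale analogy: bias is a consistent offset from the
// truth, noise is random variability around it. Values are illustrative only.
function readScale(trueWeight: number, bias: number, noiseSd: number): number {
  // Box-Muller transform for a roughly normal error term.
  const u1 = 1 - Math.random();
  const u2 = Math.random();
  const normal = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return trueWeight + bias + noiseSd * normal;
}

const trueWeight = 150;

// Biased but noise-free: always reads about 4 lbs low, and always the same.
const biasedReadings = Array.from({ length: 3 }, () => readScale(trueWeight, -4, 0));

// Unbiased but noisy: centered on the truth, yet two step-ons disagree.
const noisyReadings = Array.from({ length: 3 }, () => readScale(trueWeight, 0, 3));

console.log("biased scale:", biasedReadings); // e.g. [146, 146, 146]
console.log("noisy scale:", noisyReadings);   // e.g. [152.7, 147.1, 151.4]
```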
One obvious takeaway is that you’re probably going to get better decision making results if you target noise than you would if you targeted cognitive biases. One of the things that we’ve talked around, but not directly discussed, is this idea that fighting cognitive biases is just plain *hard*, and that you should be sceptical of most interventions designed to combat it.
Morse offers another source of reminder: the designs we create for the web -- and for most software, too -- are not likely to last forever. So... > *Don't fall in love with borders, gradients, a shade of blue, text on blurred photos, fancy animations, a certain typeface, flash, or music that autoplays. Just get attached to solving problems for people.* That last sentence is pretty good advice for programmers and designers alike. If we detach ourselves from our specific work output a bit and instead attach ourselves to solving problems for other people, we'll be able to handle their critiques more calmly. As a result, we are also likely to do better work.
In fact I suspect if you had the sixteen year old Shakespeare or Einstein in school with you, they'd seem impressive, but not totally unlike your other friends. Which is an uncomfortable thought. If they were just like us, then they had to work very hard to do what they did. And that's one reason we like to believe in genius. It gives us an excuse for being lazy. If these guys were able to do what they did only because of some magic Shakespeareness or Einsteinness, then it's not our fault if we can't do something as good.
I think the solution is to work in the other direction. Instead of working back from a goal, work forward from promising situations. This is what most successful people actually do anyway.
It's not so important what you work on, so long as you're not wasting your time. Work on things that interest you and increase your options, and worry later about which you'll take.
That's what you need to do: find a question that makes the world interesting. People who do great things look at the same world everyone else does, but notice some odd detail that's compellingly mysterious.
The way to get a big idea to appear in your head is not to hunt for big ideas, but to put in a lot of time on work that interests you, and in the process keep your mind open enough that a big idea can take roost. Einstein, Ford, and Beckenbauer all used this recipe. They all knew their work like a piano player knows the keys. So when something seemed amiss to them, they had the confidence to notice it.
Much later, I discovered an old nut by investors Warren Buffett and Charlie Munger, which they named the Noah Rule (after the [biblical story](https://en.wikipedia.org/wiki/Noah)): ‘predicting rain doesn’t count, building an ark does’.
It doesn’t matter if Singapore is geopolitically challenged, or that it faces new challenges in the 2020s, or that VC money is skipping SG entirely and flowing into Indonesia. What matters for these students are questions at a lower level of abstraction: “What are my career goals, and which companies may I work at to advance towards those goals?” along with “What are the market/regulatory/economic shifts that are *affecting those companies directly*?” These questions operate at the right level of abstraction. Commentary about macro-economic trends does not.
Similarly, this is why the literature around habit-formation should probably be [tossed out](https://commoncog.com/blog/the-power-of-habit/) when talking about organisational culture or societal behaviour. Just because a set of ideas works for one level of a system doesn’t automatically make it relevant to the level of organisation above it. To make this more concrete: policymakers who think that the habit loop is sufficient for initiating behaviour change at the level of a group would likely be burned by [Goodhart’s Law](https://en.wikipedia.org/wiki/Goodhart%27s_law), or be taken by surprise by all the second and third order effects that ripple throughout the niches in a complex adaptive system. They need a different set of ideas to effect behavioural change at higher levels of organisation — ideas like incentive system design, not individual habit formation.
The first implication we have already covered in this essay: be suspicious of ideas that are taken from one level of a complex adaptive system and are then applied to a higher (or lower) level. It is likely that they aren’t as useful as you might imagine them to be.
The second idea is a heuristic that I attribute to my friend, [Jin](https://twitter.com/jin_): “when reasoning about your career, think one level above, or one level below.” Or, as he puts it more succinctly: “think ‘plus or minus one!’”
The reason I’ve spent an entire essay arguing that we should ‘seek ideas at the right level of abstraction’ is because I think that the opposite habit — ‘use high-level analyses as a justification for our actions’— is a particularly pernicious trap for smart, analytical people. We do this because it’s a narrative stereotype: we think that geniuses must extrapolate from high-level analyses to individual action, and therefore we should do the same.
In simpler terms, ‘strong opinions, weakly held’ sometimes becomes a license to hold on to a bad opinion strongly, with downside protection, against the spirit and intent of Saffo’s original framework.
if an idea has certain affordances, and people seem to always grab onto those affordances and abuse the idea in the exact same ways, then *perhaps you shouldn’t use the idea in the first place*
Both forms of failure stem from the same tension. It’s easy to have strong opinions and hold on to them strongly. It’s easy to have weak opinions and hold on to them weakly. But it is quite difficult for the human mind to vacillate from one strong opinion to another.
As technology advances, software will increasingly be chosen not just for how well it addresses its use case, but how it conveys its personality, similar to how we choose our clothes. We’re already beginning to see this shift. In highly individualized spheres like note-taking tools and consumer crypto, software is often chosen based on identity.
By designing everything on the screen to be as photorealistic as possible, the software essentially negated needing to define its style or stance — it merely reflected the world outside of it.
By appealing to *everyone* with these clean lines, gradients, and rounded shapes, no one feels particularly compelled by them. And while these brands serve their purpose in attracting the “right” demographic to their software, they also make it difficult to form a meaningful connection with the user — the product feels like a stylistic blank slate.
Style is not just an eye-catching color or a fun skeuomorphic icon. Style is the indescribable quality that sums up how interacting with something makes a person *feel*. It isn’t simply sprinkling bits of color or animations on top of enterprise software to move metrics. It’s the feeling the user gets when everything about a product feels specific to a certain personality that they can identify and relate to.
Style matters to our economy, our society, and to each of us personally — it allows us to communicate who we are and who we want to be. While it is widely understood that style is the driving force behind [consumer purchasing decisions](https://www.scitechnol.com/peer-review/consumer-purchase-decision-making-process-based-on-the-traditional-clothing-shopping-form-mZst.php?article_id=6373) in things like clothing, I’d argue that the same has become true in technology and software as well.
Extended cognition takes the idea that your mind is ‘on’ your smartphone literally. It says that human cognitive states and processes sometimes spill outside our heads and into objects in our environment. Alleged examples include not just smartphones, but also use of simpler technology (pencil and paper to perform a calculation), our own body (ticking off our fingers when we count), and other people (our spouse who remembers appointments so we don’t have to).
Even during relatively undemanding tasks – e.g. copying a simple coloured pattern made of puzzle pieces – we off-load information processing onto the environment to reduce work for our brains (Ballard et al. 1997). Once one recognises this, one sees it everywhere: a bartender lines up cocktail glasses of different shapes to remember a complex order; a mathematician uses pencil and paper to guide their steps in a calculation; a child uses their fingers to count off the days until their next birthday. These observations reveal that intelligent, adaptive human thought and behaviour need not always be produced by the brain alone. It often involves a two-way interaction between the brain, body, and world (Dennett 1996; Hutchins 1995; Simon 1969).
The hypothesis of extended cognition (HEC) goes beyond this relatively uncontroversial observation in a controversial way. Environmental processes don’t merely interact with our brain to produce thought and behaviour. Those environmental processes have as much claim to be mental or cognitive as their brain-based collaborators.
individuals often lack global knowledge of the behaviors of others and must estimate them from the observations of their friends' behaviors. In some cases, the structure of the underlying social network can dramatically skew an individual's local observations, making a behavior appear far more common locally than it is globally.
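A toy illustration of that skew, using a made-up five-person network: a behavior held by one person in five can look locally universal when that one person happens to be everyone's friend.

```ts
// Invented example: only the hub exhibits the behavior, but the hub is the
// one friend that everyone else observes.
const friends: Record<string, string[]> = {
  hub: ["a", "b", "c", "d"],
  a: ["hub"],
  b: ["hub"],
  c: ["hub"],
  d: ["hub"],
};
const hasBehavior = new Set(["hub"]);

const people = Object.keys(friends);
const globalRate = hasBehavior.size / people.length; // 1 of 5 = 20%

// Each person estimates the behavior's prevalence from their friends alone.
const localEstimates = people.map((person) => {
  const observed = friends[person];
  const withBehavior = observed.filter((f) => hasBehavior.has(f)).length;
  return withBehavior / observed.length;
});

console.log("global prevalence:", globalRate);   // 0.2
console.log("local estimates:", localEstimates); // [0, 1, 1, 1, 1]: four of five see 100%
```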
it's the totality of those “nodal points” that indicates one’s own unique perspective. It doesn’t matter if you specifically sought out the nodal point or not; it’s the recognition that counts. When you encounter a piece of life-changing information (no matter how large the change part is), you are simultaneously discovering and creating “yourself,” becoming incrementally more complete. Your perspective (where your gaze is directed) is made up of a meandering line through these points. Learning (or maybe some precursor to learning) is a lot about developing the intuition to recognize when something you find in the world is going to be a nodal point for you.
Qualitative usability testing aims to identify issues in an interface, while quantitative usability testing is meant to provide metrics that capture the behavior of your whole user population.
Quantitative usability studies are usually summative in nature: their goal is to measure the usability of a system
In contrast, qualitative user studies are mostly formative: their goal is to figure out what doesn’t work in a design, fix it, and then move on with a new, better version.
In comes Jakob Nielsen’s article that recommends qualitative testing with 5 users. There are three main assumptions behind that recommendation:
1. That you are trying to identify issues in a design. By definition, an issue is some usability problem that the user experiences while using the design.
2. That any issue that somebody encounters is a valid one worth fixing. To make an analogy for this assumption: if one person falls into a pothole, you know you need to fix it. You don’t need 100 people to fall into it to decide it needs fixing.
3. That the probability of someone encountering an issue is 31%.
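Under those assumptions, the expected share of issues uncovered by n users is 1 − (1 − 0.31)^n, which is where the familiar claim that five users surface roughly 85% of problems comes from. A quick check:

```ts
// Expected fraction of usability problems found by n test users, assuming
// each user independently hits a given problem with probability p.
const problemsFound = (n: number, p = 0.31): number => 1 - Math.pow(1 - p, n);

for (const n of [1, 2, 3, 5, 15]) {
  console.log(n, problemsFound(n).toFixed(2));
}
// 1 -> 0.31, 2 -> 0.52, 3 -> 0.67, 5 -> 0.84, 15 -> 1.00 (asymptotically)
```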
In qualitative studies, you’re simply counting usability issues. And, while there is a statistical uncertainty about any number obtained from a quantitative study (how will the average obtained from my study compare with the average of the general population), there is absolutely no uncertainty in a qualitative study — any error identified is a legit problem that needs to be fixed.
#1: Visibility of system status The design should always keep users informed about what is going on, through appropriate feedback within a reasonable amount of time.
#2: Match between system and the real world The design should speak the users' language. Use words, phrases, and concepts familiar to the user, rather than internal jargon.
#3: User control and freedom Users often perform actions by mistake. They need a clearly marked "emergency exit"
Good error messages are important, but the best designs carefully prevent problems from occurring in the first place. Either eliminate error-prone conditions, or check for them and present users with a confirmation option before they commit to the action.
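As a small illustration of that last point (the names and the confirmation dialog are invented for the example):

```ts
// Stand-in for a confirmation dialog; in a browser this might be window.confirm.
const askToConfirm = (message: string): boolean => {
  console.log("CONFIRM?", message);
  return true; // assume the user accepted, for the sake of the example
};

// Error prevention: detect the error-prone condition (a bulk delete) and ask
// for confirmation before committing, with a clear way to back out.
function deleteNotes(selected: string[]): void {
  if (selected.length > 1 && !askToConfirm(`Delete ${selected.length} notes? This cannot be undone.`)) {
    return; // the "emergency exit": nothing is committed
  }
  console.log("deleted:", selected);
}

deleteNotes(["Inbox zero", "Old scratch note"]); // prompts before committing
```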
Well, if you write the way I do, which means that you start something and you rewrite it, especially the beginning part, you rewrite it time and time and time again, at some point you feel, not that it’s beyond repair, but that there’s nothing more you can do to fix it. In other words, as far as you’re concerned, that’s what you were going to write.
We must remember quantitative data is only relevant to the actions currently available and therefore has the potential to limit our thinking.
people set higher standards of evidence for hypotheses that go against their current expectations. We tend to search for, interpret, favor, and recall information in a way that confirms or supports our prior beliefs or values.
“pay attention to what users do, not what they say”. User actions and feedback are seldom aligned, and this can introduce inaccurate data into the design process if we rely too much on user feedback alone.
Meadows defines a system as “an interconnected set of elements that is coherently organized in a way that achieves something.”
I’d suggest that the interconnections in a design system include design constraints, the feedback loop between implementers of the design system and the system’s consumers, the cadence at which implementers update documentation, the amount of money the company invests in the design system, and the way that the component library reads design tokens.
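To make one of those interconnections concrete, here is an illustrative sketch of a component library reading shared design tokens rather than hard-coding values (the token names and values are made up):

```ts
// Invented design tokens; in a real system these might come from a JSON file
// or a tokens package shared between design and code.
const tokens = {
  color: { accent: "#0a66c2", surface: "#ffffff" },
  space: { sm: 8, md: 16 },
  radius: { md: 6 },
} as const;

// A component consumes the tokens; changing a token value propagates through
// every component that reads it, which is exactly the interconnection at work.
function buttonStyle() {
  return {
    background: tokens.color.accent,
    padding: `${tokens.space.sm}px ${tokens.space.md}px`,
    borderRadius: `${tokens.radius.md}px`,
  };
}

console.log(buttonStyle());
```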
Meadows points out that “purposes are deduced from behavior, not from rhetoric or stated goals,” meaning that the only real way to understand a system’s purpose is by watching its behavior. As implementers of design systems, we often try to dictate the system’s purpose. We might state goals like: “the design system will improve design consistency.” Unfortunately, saying it doesn’t make it true. We need to make sure that we understand the different elements and interconnections in our system. Modifying the way elements are interconnected can allow us to adjust the purpose of the system — but we need to make sure that we understand it first.
Tiny Wins are often shortcuts. They save a user’s time by getting rid of existing steps — physical or mental — required to perform an action. This is a really useful way to think about the types of changes we saw above, and a good way to differentiate them from other low-hanging fruits that don’t belong on your lists.
Tiny Wins are high impact. They affect things that the majority of users interact with on a regular basis. If the change won’t have the compounding effects we’ve discussed, it doesn’t belong on the list.
Tiny Wins are low effort. These projects are straightforward, scoped, and take a short amount of time. If the change requires a significant amount of time and effort, it doesn’t belong on the list.
Tiny Wins are standalone. These changes are small, scoped, and provide their own value. If the change can’t be appreciated on its own, it doesn’t belong on the list.
I noticed something weird about the issues we solved with these changes. They were almost never reported. Hundreds of people were ecstatic when we added that arrow to PR pages. Out of those, not a single one indicated that this flow was confusing. A lot of people assumed it was their own fault for not just “getting” it. Others get so accustomed to these flows that they don’t even notice their anxiety. If they do, it’s just part of life. The status quo. Something to live with, not improve.
MVPs and iteration are powerful tools that should be leveraged by companies looking to move quickly. But Tiny Wins are much more potent when it comes to filling the gaps, improving retention, and nurturing your community of users.
Because of this, these changes were perceived and acknowledged as fresh, complete features. They communicated to users that they were being listened to. These features bred excitement, goodwill, and likely loyalty towards their respective companies. Hell, they probably even contributed to some organic growth.
“You’re efficient when you do something with minimum waste. And you’re effective when you’re doing the right something.”
DeMarco defines slack as “the degree of freedom required to effect change. Slack is the natural enemy of efficiency and efficiency is the natural enemy of slack.” Elsewhere, he writes: “Slack represents operational capacity sacrificed in the interests of long-term health.”
Decision fatigue can lead to poor choices and irrational trade-offs in decision making. It refers to the deteriorating quality of decisions after a long session of decision making. Too many decisions end up depleting our willpower, to the point that we end up making increasingly poor choices.
Key sections to include in your design spec:
• Vision. Why is this feature important? Why is this feature a priority?
• Objectives and key results. What is required for this feature to be complete, and what metrics should be measured for success?
• Deliverables and release plan. What user interfaces are being shipped and when? Does it require changes to on-boarding, documentation, billing, and/or marketing?
• Dependencies. What features or functionality that don’t yet exist are required before it can be released?
• Open questions. What needs more clarification?
Creativity is the ability to imagine something that isn’t there — yet. Its core ingredients are ideas. An idea is a new thought that springs up from existing knowledge. Thoughts collide — something “clicks” in our mind.
If you want more ideas, it helps to have more knowledge. We first immerse ourselves in a topic and learn as much about the subject as possible. The more thoughts we have on what is, the likelier it is that some of them combine into an image of what could be.
When the right information is captured at the right level of detail and in the right places, it’s not wasteful. Good documentation leads to better and faster decision making, aids in presenting and justifying design decisions, and reduces the cognitive load for the team by acting as a form of external memory.
Taxonomies may be thought of as hierarchies of categories to group and organize information to be found when browsing, or as a structured set of terms used to tag content so that it can be retrieved efficiently and accurately. Sometimes the same taxonomy may serve both purposes, and sometimes two different taxonomies are used, one for each purpose, for the same content or site.
Understanding the users and their tasks and needs is a foundation for all things UX. Taxonomy building is not any different. …Who are the users? What are they trying to do? How do they currently tackle this problem? What works and what doesn’t?
Understanding the users is of central importance, so let’s consider specifically two techniques we can use to make a taxonomy more suitable for its users: (1) adapting the names or labels of the taxonomy concepts (terms) to the language of the users, and (2) adapting the categorization hierarchy to the expectation of the users.
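A small sketch of a taxonomy serving both purposes at once, browsing and tagging/retrieval; the categories, labels, and content are invented:

```ts
// A taxonomy as a hierarchy of categories whose labels double as tag terms.
interface Category {
  label: string;            // user-facing name, adapted to the users' language
  children?: Category[];
}

const taxonomy: Category = {
  label: "Recipes",
  children: [
    { label: "Breakfast" },
    { label: "Dinner", children: [{ label: "Pasta" }, { label: "Stir-fry" }] },
  ],
};

// Flatten a branch into its terms so tagged content can be retrieved from it.
function terms(node: Category): string[] {
  return [node.label, ...(node.children ?? []).flatMap(terms)];
}

const articles = [
  { title: "Weeknight noodles", tags: ["Dinner", "Stir-fry"] },
  { title: "Overnight oats", tags: ["Breakfast"] },
];

// Browsing "Dinner" should also surface content tagged with its narrower terms.
function findByCategory(node: Category) {
  const wanted = new Set(terms(node));
  return articles.filter((a) => a.tags.some((t) => wanted.has(t)));
}

console.log(findByCategory(taxonomy.children![1])); // [{ title: "Weeknight noodles", ... }]
```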
Systems thinking is the ability or skill to perform problem-solving in complex systems. In application, it has been defined as both a skill and an awareness. A system is an entity with interrelated and interdependent parts; it is defined by its boundaries and is more than the sum of its parts. Changing one part of the system affects other parts and the whole system, with predictable patterns of behaviour.
Systems thinking replaces Design Thinking’s reductionism (the belief that everything can be reduced to individual parts) with:
• Expansionism (the belief that a system is always a sub-system of some larger system)
• Analysis (gaining knowledge of the system by understanding its parts)
• Synthesis (explaining its role in the larger system of which it is a part)
What system thinking really means in design is this: every decision you make affects other past and future decisions. Every pixel you draw is bound by rules governed by your previous choices. When you make a change — no matter how big or small — it may reverberate through your entire design and require other updates to keep the coherence of the system intact.
When you see a design that looks messy and inconsistent, it’s the result of a lack of system thinking. When your design breaks when it’s stretched beyond the happy path, your incomplete system thinking is to blame.
Working atomically and making things reusable early and often — this automates system-wide changes and helps you avoid rework and unintended consequences.
Respecting the butterfly effect. Is a decision in isolation, or will its impact reverberate across other areas of your system? Never make a design decision without first understanding the scope of its consequences.