And that’s not an accident. [One of the most famous studies](https://web.mit.edu/5.95/www/readings/bloom-two-sigma.pdf) in educational psychology found that students who learned through 1-1 tutoring performed two standard deviations better than students who learned through a traditional classroom environment; in other words, the average tutored student scored better than roughly 98% of the classroom group.
1-1 tutoring is extremely valuable, but it’s totally different from taking a class. I had to bring a lot more to the table to get what I wanted out of the experience. When you’re doing tutoring with someone who doesn’t teach professionally, they won’t have a course structure or plan. So I had to suggest a structure, bring in work that I wanted to review, identify skills I wanted to build, and follow through by making progress on my own between tutoring sessions.
When it comes to reading, you don’t need to finish what you start. Once you realize that you can quit bad books (or reading anything for that matter) without guilt, everything changes. Think of it this way: **All the time you spend reading a bad book comes at the expense of a good book.** Skim a lot of books. Read a few. Immediately re-read the best ones twice.
But the confidence, like a retweeted Beeple, is somehow false. I don’t really *own* the idea. It’s not in my wallet. I don’t know its corners, its edges, or its flaws. I’m just pasting it on top of my decision to make it look like I do. The mental model isn’t actually helping me in any way. It’s just decorating my decision. It helps me impress myself, and other people.
The way to get rid of the bullshit and the LARPing is to honestly attempt to connect the mental model in your head to the results in the world—if you do this enough, real understanding will start to click into place. In short, just having experiences and using fancy words doesn’t actually teach you anything. You have to *reflect* on your experiences to generate actual understanding. This is a process he calls [the Learning Loop](https://www.youtube.com/watch?v=iPkBuTpz3rc): having experiences, reflecting on those experiences, and using them to refine your model of the world so that you can do better next time.
We tend to think that we learn through having an experience but that’s not how we learn at all. We learn by reflecting on an experience.
It works in a cycle that I call the ‘learning loop’. Think about a clock: at twelve o’clock on the dial, you have an experience. At three o’clock, you reflect upon that experience. At six, that reflection creates an abstraction—a mental model—and at nine, you go on to take action based on that. Draw little arrows between them, and you can visualize this loop of learning.
You can consume someone else’s abstractions all day long, but it doesn’t mean much unless you understand how they arrived at the conclusions. In other words, you need to go out into the world and do things you can reflect on in order to truly learn and create your own mental models. If you’re talking to someone else, you need to ask them detailed questions. What was their experience? What variables do they think matter? How do those variables interact over time? What do they know that most other people don’t? For your experiences, I recommend writing them down. And by the way, trying to explain something in writing is a powerful way to approach learning. Writing can teach us to reflect—it slows us down, shows us what we don’t understand, and makes us aware of the gaps in our knowledge.
Decision journals help you … reflect. And reflection is the key to learning. Here’s what you do. You make a decision about something, and you write it down—in your own writing, not on a computer—along with all the reasons why you’ve made it. You try to keep track of the problem you’re trying to solve for and its context, what you expect to happen and why, and the possible complications. It’s also important to keep track of the time of day that you’re making the decision, and how you’re feeling. Then you sleep on it. Don’t tell anyone. Just sleep on it. When you wake up fresh in the morning, you go back to the journal, read what you were thinking, and see how you feel about what you decided. What you’re doing is slowing down. You’re not implementing your decisions immediately, based solely on intuition or instinct—you’re giving yourself that night of sleep to dampen the emotions around certain aspects of the decision, and perhaps to heighten others. You’ll be able to filter what’s important from what isn’t so much more effectively.
But I find that Anki makes me good at remembering the answers to Anki cards—rather than bringing the knowledge contained in them into the world and into my writing.
The key thing to note here, though, is that the ideal copilot isn’t just referencing any relevant book or fact when it tries to help you. It’s referencing *your* books and your notes when you’re working with it.
**Privacy and IP concerns.** Many users are going to be hesitant about uploading notes or highlights or journal entries to models like these—for good reason. I suspect these use cases will start to take off when high-quality LLM experiences are available to run natively on your phone or laptop, instead of forcing you to send your data to a cloud API for completion.
**An actually good user experience.** What you want is a UX where copilot completions are shown in a frictionless way that feels *helpful* instead of annoying. GitHub Copilot nailed this for programming, so I believe it’s possible for other use cases. But it’s a balancing act. For more, read last week’s essay “Where Copilots Work.”
inkandswitch.com
Writers often prefer to initially ideate in private and share the result with their collaborators later, when they are ready.
we found that our interviewees also had significant reservations about real-time collaboration. Several writers we talked to wanted a tool that would allow them to work in private, with no other collaborators reading their work in progress. Intermediate drafts aren’t always suitable to share, even with collaborators, and feedback on those drafts can be unwelcome or even embarrassing. In addition, some writers are troubled by the idea that their senior co-workers and management may be monitoring them – an unintended negative side effect of real-time collaboration.
Other writers reported putting their device into offline (airplane) mode to prevent their edits being shared while they worked.
With this approach of integrating AI into our creative workflows, the AI is always subordinate to human users. It has no agency but that which is granted exactly and literally by the human operator.
In this model, the human and the AI are two **independent, autonomous agents at equal level of engagement with the work** being produced, and they have access to the same interaction mechanics and tools to accomplish the task together. Working with this kind of AI is like working with a smart human collaborator – you don’t invoke them to help you accomplish something specific they’re there to do; you learn how they think, they learn how you think, and you develop a sense of how to produce the best ideas together. The collaboration is much more organic, and there’s a constant feedback loop informing both participants about the ever-changing creative direction.
Recently lots of people have been trying very hard to make large language models like ChatGPT into better *oracles*—when we ask them questions, we want the perfect answer. As an example, in my [last post](https://www.geoffreylitt.com/2023/01/29/fun-with-compositional-llms-querying-basketball-stats-with-gpt-3-statmuse-langchain.html), I explored some techniques for helping LLMs answer complex questions more reliably by coordinating multiple steps with external tools. I’ve been wondering, though, if this framing is missing a different opportunity. **What if we were to think of LLMs not as tools for answering questions, but as tools for *asking* us questions and inspiring our creativity?** Could they serve as on-demand conversation partners for helping us to develop our best thoughts? As a creative *muse*?
• *Yes, you can capture facts in your zettelkasten*
• *Yes, you should restate them in your own words, and create new notes where you actually say something about the fact*
As you record facts in your zettelkasten, consider creating new notes so you can speak *about* the fact itself. By providing additional commentary, you can better integrate the information into your broader understanding of the topic, enhancing both your comprehension and your ability to write about the topic effectively.
Commenting can take many forms in your notes.[3](https://writing.bobdoto.computer/how-to-handle-facts-in-your-zettelkasten#fn-3) The most obvious (and most highly regarded) are comments that specifically relate different ideas to one another. But, other kinds of comments may prove valuable, as well. Comments about how a fact shows up in your daily life, how a fact is regarded in public discourse, how a fact is disputed, all make for valuable content. The important thing is to bring the fact into contact with your own thinking. It's what you have to say about facts that matters most.
At its simplest, the trust thermocline represents the point at which a consumer decides that the mental cost of staying with a product is outweighed by their desire to abandon it. This may seem like an obvious problem, yet if that were the case, this behavior wouldn’t happen so frequently in technology businesses and in more traditional firms that prided themselves on consumer loyalty, such as car manufacturers and retail chains.
Trust thermoclines are so dangerous for businesses to cross because there are few ways back once a breach has been made, even if the issue is recognized. Consumers will not return to a product that has breached the thermocline unless significant time has passed, even if it means adopting an alternative product that until recently they felt was significantly inferior.
Anne-Laure Le Cunff
We confuse hard work for high-leverage work. These low-leverage tasks don’t meaningfully contribute to our success, and they certainly don’t contribute to our well-being.
Moving the needle may imply a corresponding level of hard work, which is not the case with high-leverage activities. This is the basic principle of leverage: using a lever amplifies your input to provide a greater output. Good levers work as energy multipliers. Instead of moving the needle, you want to operate the most efficient levers.
His response changed my life. It was a simple thing. He said “Man, give it five minutes.” I asked him what he meant by that. He said, it’s fine to disagree, it’s fine to push back, it’s great to have strong opinions and beliefs, but give my ideas some time to set in before you’re sure you want to argue against them. “Five minutes” represented “think”, not react. He was totally right. I came into the discussion looking to prove something, not learn something.
But pilots are still needed. Likewise, designers won’t be replaced; they’ll become operators of increasingly complicated AI-powered machines. New tools will enable designers to be more productive, designing applications and interfaces that can be implemented faster and with fewer bugs. These tools will expand our brains, helping us cover accessibility and usability concerns that previously took hours of effort from UX specialists and QA engineers.
• The universe is the “source” of all creativity. It is the source of an energy that we all tap into.
• The universe pushes this energy as “data” toward the artist. It’s a cacophony of emotions, visual stimuli, and sounds that the artist stores in a “vessel.”
• The artist develops a “filter” to determine what is allowed to reside in the vessel.
• The work of an artist is to shape their life so they can get closer to the source.
• They channel that source into something of personal value.
“The objective is not to learn to mimic greatness, but to calibrate our internal meter for greatness,” he writes. “So we can better make the thousands of choices that might ultimately lead to our own great work.”
A common problem with which I struggle as a creator is how much to participate in the discourse. Many people make their living by having the spiciest take on the news of the day, and sometimes I wonder if I would be better off being a larger participant in the culture. Again, Rubin has useful advice: “It’s helpful to view currents in the culture without feeling obligated to follow the direction of their flow. Instead, notice them in the same connected, detached way you might notice a warm wind. Let yourself move within it, yet not be *of* it.”
Perhaps the real magic of this book isn’t the advice itself. It is generic. It *is* anodyne. But maybe that’s the point. *The Creative Act* isn’t an advice book. It is artistic permission given in written form. What makes this book so magical is that he somehow translates his gift in the studio to the page. Rubin’s task is not to tell you how to create or how to act. His book gives you permission to be yourself. As he says, “No matter what tools you use to create, the true instrument is you.”
All that time Gloria spends doing nothing isn’t wasted time. It’s slack: excess capacity allowing for responsiveness and flexibility. The slack time is important because it means she never has a backlog of tasks to complete. She can always deal with anything new straight away. Gloria’s job is to ensure Tony is as busy as he needs to be. It’s not to be as busy as possible. **If you ever find yourself stressed, overwhelmed, sinking into stasis despite wanting to change, or frustrated when you can’t respond to new opportunities, you need more slack in your life.**
DeMarco defines slack as “*the degree of freedom required to effect change. Slack is the natural enemy of efficiency and efficiency is the natural enemy of slack.*” Elsewhere, he writes: “*Slack represents operational capacity sacrificed in the interests of long-term health*.”
But my success has also happened because I’ve given myself *space.* I ignore all the extra things I’m “supposed to do” that I mentioned above so I can pursue something called “afflatus.” Afflatus is a Latin word that refers to a sudden rush or inspiration, seemingly from the divine or supernatural. Moments of afflatus are euphoric and intoxicating. When they occur and I create output, I always end up happier.
I’m not advocating for a lifestyle of ease and no work. I work so, so hard to make this writing happen every week. There are always late nights and sacrifices. What I’m arguing for is the cultivation of a state of being to allow for afflatus to occur.
My wife shared a Kurt Vonnegut interview with me in which the author discusses going to buy some [envelopes](https://www.cbsnews.com/news/god-bless-you-mr-vonnegut/).

> “Oh, she says well, you're not a poor man. You know, why don't you go online and buy 100 envelopes and put them in the closet?
>
> And so I pretend not to hear her. And go out to get an envelope because I'm going to have a hell of a good time in the process of buying one envelope.
>
> I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don't know...And, of course, the computers will do us out of that. And, what the computer people don't realize, or they don't care, is we're dancing animals. You know, we love to move around. And, we're not supposed to dance at all anymore.”

We are dancing animals, not quick-sync meeting animals.
A CRM is an essentialist piece of software. A CRM knows the essential objects in the world that it needs to care about: customer, company, and geography. It creates information structures to represent those objects, and then relates them together in a unified and standardized way. A CRM is creating a little model of one corner of reality. A notes app is not essentialist in the same way. Yes, it has a notebook and note structure but those are more or less unopinionated containers. When it comes down to the actual information contained inside of those notes, it throws its hands up and says, “I don’t know the structure!” and just gives you a big blank box to throw all of the information into.
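To make the contrast concrete, here is a tiny sketch in Python. The field names are invented for illustration and not any particular CRM's schema; the point is that the CRM side declares its objects and their relationships, while the notes side is just a container around free text.

```python
# A minimal sketch contrasting an "essentialist" data model (CRM-style) with the
# unstructured container a notes app offers. Field names are invented examples.
from dataclasses import dataclass

@dataclass
class Company:
    name: str
    geography: str        # one of the "essential objects" a CRM cares about

@dataclass
class Customer:
    name: str
    company: Company      # relationships are explicit and standardized

@dataclass
class Note:
    body: str             # the notes app's "big blank box"
    notebook: str = "Inbox"  # an unopinionated container, nothing more

acme = Company("Acme Corp", "North America")
jane = Customer("Jane Doe", acme)
scribble = Note("Met Jane from Acme. Follow up next week about pricing.")
```

The CRM knows what a Customer and a Company are and how they relate; the note knows nothing beyond "here is some text in a container."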
The more precisely we know what to use a piece of information for, the more precisely we can organize it.
Notes, in the broadest sense, are not like this. They cannot be depended on to be part of a standard, well-defined process. A piece of information is a note when you have only a vague idea of how it will be used. Or, when you have one idea of how it will be used, but you think there may be many more ways it could be used down the road, too — it’s hard to predict.
What we learned earlier is that the less you can predict how you’ll use information, the more flexible the system you’ll need to organize it. The more you can predict how you’ll use information, the less flexible the system you’ll need.
AI changes this equation. A better way to unlock the value in your old notes is to use intelligence to surface the right note, at the right time, and in the right format for you to use it most effectively. When you have intelligence at your disposal, you don’t need to organize.
For an old note to be helpful it needs to be presented to Future You in a way that *clicks* into what you’re working on instantly—with as little processing as possible.
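As a rough illustration of what "surfacing the right note" could mean in practice, here is a minimal sketch. It ranks old notes by similarity to the draft you are working on; a real system would use a learned embedding model rather than this bag-of-words stand-in, and the sample notes are invented.

```python
# A self-contained sketch: surface the old notes most relevant to the draft in
# progress. Word-count cosine similarity stands in for a real embedding model.
from collections import Counter
import math, re

def words(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts using simple word counts."""
    wa, wb = words(a), words(b)
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = math.sqrt(sum(v * v for v in wa.values())) * math.sqrt(sum(v * v for v in wb.values()))
    return dot / norm if norm else 0.0

def surface_notes(draft: str, archive: list[str], k: int = 3) -> list[str]:
    """Return the k old notes most relevant to what you're writing right now."""
    return sorted(archive, key=lambda note: similarity(draft, note), reverse=True)[:k]

archive = [
    "DeMarco: slack is operational capacity sacrificed in the interests of long-term health.",
    "Meeting notes: Q3 planning, decided to delay the mobile release.",
    "Bloom's two-sigma study: tutored students outperform classroom students by two standard deviations.",
]
print(surface_notes("How much do tutored students outperform a traditional classroom?", archive, k=1))
```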
Think about starting a project—maybe you’re writing an article about a new topic—and having an LLM automatically write and present to you a report outlining key quotes and ideas from books you’ve read that are relevant to the article you’re writing.
Research reports are valuable, but what you really want is to mentally download your entire note archive every time you touch your keyboard. Imagine an autocomplete experience—like GitHub Copilot—that uses your note archive to try to fill in whatever you’re writing. Here are some examples:

• When you make a point in an article you’re writing, it could suggest a quote to illustrate it.
• When you’re writing about a decision, it could suggest supporting (or disconfirming) evidence from the past.
• When you’re writing an email, it could pull previous meeting notes to help you make your point.

An experience like this turns your note archive into an intimate thought partner that uses everything you’ve ever written to make you smarter as you type.
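Here is a hedged sketch of how an autocomplete like that could be wired up. The `call_llm` function is a placeholder rather than a real API, the relevance scoring is deliberately crude, and the sample notes are invented; the point is the shape of the loop: retrieve your own notes, then ask a model to continue the draft with them.

```python
# A sketch of a notes-backed autocomplete. `call_llm` is a stand-in, not a real
# library call: swap in whatever local or hosted model you actually use.
def pick_relevant(draft: str, archive: list[str], k: int = 3) -> list[str]:
    # Crude relevance: count shared words. A real copilot would use embeddings.
    draft_words = set(draft.lower().split())
    return sorted(archive, key=lambda n: len(draft_words & set(n.lower().split())), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    return "[model-suggested continuation would appear here]"  # placeholder only

def suggest_completion(draft: str, archive: list[str]) -> str:
    """Suggest a next sentence grounded in the writer's own notes."""
    notes = "\n".join(f"- {n}" for n in pick_relevant(draft, archive))
    prompt = (
        "Using only these notes, suggest one sentence that could come next.\n\n"
        f"Notes:\n{notes}\n\nDraft so far:\n{draft}\n\nNext sentence:"
    )
    return call_llm(prompt)

archive = [
    "Meeting 2023-04-02: client asked for a launch-date estimate.",
    "Decision log: we chose PostgreSQL over DynamoDB for reporting queries.",
]
print(suggest_completion("In our last email I promised an update on the database choice", archive))
```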
Keeping track of our thoughts in that regard can be tricky, but there’s a single principle which will absolutely make it easier: that of atomicity. A thought has to be graspable in one brief session, otherwise it might as well not be there at all. The way to achieve this is to ensure that there’s nothing else you can possibly take away from it: make it irreducible.
The killer feature is that wikis make it *trivially easy to break information into chunks*, by creating a new page at any time, and they then allow you (equally trivially) to refer to that information from anywhere. It is the inherent focus on decomposition and atomicity which makes a wiki — or any broadly similar structure, in terms of unrepeated and irreducible units of thought — so incredibly powerful.
The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”
One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.
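A toy version of that sampling trick, with invented word scores: dividing by a "temperature" before the softmax controls how often lower-ranked words sneak through. Always taking the top word is the "flat essay" case the paragraph above describes.

```python
# Toy illustration of "sometimes (at random) picking lower-ranked words":
# sample the next word from a softmax over invented scores. Lower temperature
# hugs the top-ranked word; higher temperature lets lower-ranked words through.
import math, random

def sample_next_word(scores: dict[str, float], temperature: float = 0.8) -> str:
    words = list(scores)
    logits = [scores[w] / temperature for w in words]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]  # numerically stable softmax weights
    return random.choices(words, weights=weights, k=1)[0]

scores = {"learn": 4.5, "predict": 3.9, "best": 3.1, "wander": 1.2}  # invented scores
print([sample_next_word(scores) for _ in range(5)])
```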
And we might imagine that if we were able to use sufficiently long *n*-grams we’d basically “get a ChatGPT”—in the sense that we’d get something that would generate essay-length sequences of words with the “correct overall essay probabilities”. But here’s the problem: there just isn’t even close to enough English text that’s ever been written to be able to deduce those probabilities.
In a [crawl of the web](https://commoncrawl.org/) there might be a few hundred billion words; in books that have been digitized there might be another hundred billion words. But with 40,000 common words, even the number of possible 2-grams is already 1.6 billion—and the number of possible 3-grams is 60 trillion. So there’s no way we can estimate the probabilities even for all of these from text that’s out there. And by the time we get to “essay fragments” of 20 words, the number of possibilities is larger than the number of particles in the universe, so in a sense they could never all be written down.
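The arithmetic behind those figures is easy to check:

```python
# Counting possible n-grams over a 40,000-word vocabulary.
vocab = 40_000
for n in (2, 3, 20):
    print(f"{n}-grams: {vocab ** n:.2e} possibilities")
# 2-grams:  1.60e+09  (the ~1.6 billion above)
# 3-grams:  6.40e+13  (the ~60 trillion above, to the same order of magnitude)
# 20-grams: 1.10e+92  (far beyond the ~10^80 particles in the observable universe)
```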
Say you want to know (as [Galileo did back in the late 1500s](https://archive.org/details/bub_gb_49d42xp-USMC/page/404/mode/2up)) how long it’s going to take a cannon ball dropped from each floor of the Tower of Pisa to hit the ground. Well, you could just measure it in each case and make a table of the results. Or you could do what is the essence of theoretical science: make a model that gives some kind of procedure for computing the answer rather than just measuring and remembering each case.
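The "model" in this case is just the free-fall formula: ignoring air resistance, a drop from height h takes t = sqrt(2h/g). A quick sketch, with made-up floor heights standing in for the actual tower:

```python
# Compute fall times from a formula instead of measuring and tabulating each case.
# Floor heights below are invented for illustration.
import math

g = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m: float) -> float:
    """Time (seconds) for an object dropped from height_m to hit the ground."""
    return math.sqrt(2 * height_m / g)

for floor, height in enumerate([6.0, 12.0, 18.0, 24.0], start=1):  # assumed heights
    print(f"floor {floor}: {fall_time(height):.2f} s")
```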
OK, so how do our typical models for tasks like [image recognition](https://writings.stephenwolfram.com/2015/05/wolfram-language-artificial-intelligence-the-image-identification-project/) actually work? The most popular—and successful—current approach uses [neural nets](https://reference.wolfram.com/language/guide/NeuralNetworks.html). Invented—in a form remarkably close to their use today—[in the 1940s](https://www.wolframscience.com/nks/notes-10-12--history-of-ideas-about-thinking/), neural nets can be thought of as simple idealizations of how [brains seem to work](https://www.wolframscience.com/nks/notes-10-12--the-brain/).
In human brains there are about 100 billion neurons (nerve cells), each capable of producing an electrical pulse up to perhaps a thousand times a second. The neurons are connected in a complicated net, with each neuron having tree-like branches allowing it to pass electrical signals to perhaps thousands of other neurons. And in a rough approximation, whether any given neuron produces an electrical pulse at a given moment depends on what pulses it’s received from other neurons—with different connections contributing with different “weights”.
OK, but how does a neural net like this “recognize things”? The key is the [notion of attractors](https://www.wolframscience.com/nks/chap-6--starting-from-randomness#sect-6-7--the-notion-of-attractors). Imagine we’ve got handwritten images of 1’s and 2’s. We somehow want all the 1’s to “be attracted to one place”, and all the 2’s to “be attracted to another place”. Or, put a different way, if an image is somehow “[closer to being a 1](https://www.wolframscience.com/nks/notes-10-12--memory-analogs-with-numerical-data/)” than to being a 2, we want it to end up in the “1 place” and vice versa.
We can think of this as implementing a kind of “recognition task” in which we’re not doing something like identifying what digit a given image “looks most like”—but rather we’re just, quite directly, seeing what dot a given point is closest to.
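That "closest dot" idea fits in a few lines. The attractor positions below are invented stand-ins for the "1 place" and the "2 place" described above.

```python
# Bare-bones nearest-attractor classification: a point is assigned to whichever
# attractor it is closest to. Attractor coordinates are invented examples.
import math

attractors = {"1": (0.2, 0.8), "2": (0.8, 0.3)}

def classify(point: tuple[float, float]) -> str:
    return min(attractors, key=lambda label: math.dist(point, attractors[label]))

print(classify((0.25, 0.7)))  # lands in the "1 place"
print(classify((0.9, 0.2)))   # lands in the "2 place"
```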
So how do we do this with a neural net? Ultimately a neural net is a connected collection of idealized “neurons”—usually arranged in layers. Each “neuron” is effectively set up to evaluate a simple numerical function. And to “use” the network, we simply feed numbers (like our coordinates *x* and *y*) in at the top, then have neurons on each layer “evaluate their functions” and feed the results forward through the network—eventually producing the final result at the bottom.
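Here is a bare-bones sketch of that layered evaluation: feed two numbers in, let each layer compute weighted sums pushed through a simple nonlinearity, and read the answer out at the bottom. The weights are made up; as the next paragraph notes, real ones come from training.

```python
# Minimal feed-forward evaluation of a two-layer "network" with invented weights.
import math

def layer(inputs, weights, biases):
    """One layer: each neuron computes tanh(w . inputs + b)."""
    return [
        math.tanh(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

def network(x, y):
    hidden = layer([x, y], weights=[[1.0, -2.0], [0.5, 1.5]], biases=[0.1, -0.3])
    output = layer(hidden, weights=[[2.0, -1.0]], biases=[0.0])
    return output[0]

print(network(0.3, 0.7))
```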
For each task we want the neural net to perform (or, equivalently, for each overall function we want it to evaluate) we’ll have different choices of weights. (And—as we’ll discuss later—these weights are normally determined by “training” the neural net using machine learning from examples of the outputs we want.)
Whatever input it’s given, the neural net is generating an answer. And, it turns out, to do it a way that’s reasonably consistent with what humans might do. As I’ve said above, that’s not a fact we can “derive from first principles”. It’s just something that’s empirically been found to be true, at least in certain domains. But it’s a key reason why neural nets are useful: that they somehow capture a “human-like” way of doing things.
But let’s say we want a “theory of cat recognition” in neural nets. We can say: “Look, this particular net does it”—and immediately that gives us some sense of “how hard a problem” it is (and, for example, how many neurons or layers might be needed). But at least as of now we don’t have a way to “give a narrative description” of what the network is doing. And maybe that’s because it truly is computationally irreducible, and there’s no general way to find what it does except by explicitly tracing each step. Or maybe it’s just that we haven’t “figured out the science”, and identified the “natural laws” that allow us to summarize what’s going on. We’ll encounter the same kinds of issues when we talk about generating language with ChatGPT.
Apple Computer, Inc
A human interface is the sum of all communication between the computer and the user. It's what presents information to the user and accepts information from the user. It's what actually puts the computer's power into the user's hands.
The Apple Desktop Interface is the result of a great deal of concern with the human part of human-computer interaction. It has been designed explicitly to enhance the effectiveness of people. This approach has frequently been labeled user friendly, though user centered is probably more appropriate.
The Apple Desktop Interface is based on the assumption that people are instinctively curious: they want to learn, and they learn best by active self-directed exploration of their environment. People strive to master their environment: they like to have a sense of control over what they are doing, to see and understand the results of their own actions. People are also skilled at manipulating symbolic representations: they love to communicate in verbal, visual, and gestural languages. Finally, people are both imaginative and artistic when they are provided with a comfortable context; they are most productive and effective when the environment in which they work and play is enjoyable and challenging.
Use concrete metaphors and make them plain, so that users have a set of expectations to apply to computer environments.
Most people now using computers don't have years of experience with several different computer systems. What they do have is years of direct experience with their immediate world. To take advantage of this prior experience, computer designers frequently use metaphors for computer processes that correspond to the everyday world that people are comfortable with.
Once immersed in the desktop metaphor, users can adapt readily to loose connections with physical situations —the metaphor need not be taken to its logical extremes.
People appreciate visual effects, such as animation, that show that a requested action is being carried out. This is why, when a window is closed, it appears to shrink into a folder or icon. Visual effects can also add entertainment and excitement to programs that might otherwise seem dull. Why shouldn't using a computer be fun?
Users rely on recognition, not recall; they shouldn't have to remember anything the computer already knows.
Most programmers have no trouble working with a command-line interface that requires memorization and Boolean logic. The average user is not a programmer.
It is essential, however, that keyboard equivalents offer an alternative to the see-and-point approach —not a substitute for it. Users who are new to a particular application, or who are looking for potential actions in a confused moment, must always have the option of finding a desired object or action on the screen.
To be in charge, the user must be informed. When, for example, the user initiates an operation, immediate feedback confirms that the operation is being carried out, and (eventually) that it's finished.
This communication should be brief, direct, and expressed in the user's vocabulary rather than the programmer's.
Even though users like to have full documentation with their software, they don't like to read manuals (do you?). They would rather figure out how something works in the same way they learned to do things when they were children: by exploration, with lots of action and lots of feedback.
Users feel comfortable in a computer environment that remains understandable and familiar rather than changing randomly.
Visually confusing or unattractive displays detract from the effectiveness of human-computer interactions.
Users should be able to control the superficial appearance of their computer workplaces —to display their own style and individuality.
Animation, when used sparingly, is one of the best ways to draw the user's attention to a particular place on the screen.
With few exceptions, a given action on the user's part should always have the same result, irrespective of past activities.
Modes are contexts in which a user action is interpreted differently than the same action would be interpreted in another context.
Because people don't usually operate modally in real life, dealing with modes in computer environments gives the impression that computers are unnatural and unfriendly.
A mode is especially confusing when the user enters it unintentionally. When this happens, familiar objects and commands may take on unexpected meanings and the user's habitual actions cause unexpected results.
Direct physical control over the work environment puts the user in command and optimizes the "see-and-point" style of interface.
Simply moving the mouse just moves the pointer. All other events —changes to the information displayed on the screen—take place only when the mouse button is used.
The changing pointer is one of the few truly modal aspects of the Apple Desktop Interface: a given action may yield quite different results, depending on the shape of the pointer at the time.
There is always a visual cue to show that something has been selected. For example, text and icons usually appear in inverse video when selected. The important thing is that there should always be immediate feedback, so the user knows that clicking or dragging the mouse had an effect.
Apple's goal in adding color to the Desktop Interface is to add meaning, not just to color things so they "look good." Color can be a valuable additional channel of information to the user, but must be used carefully, otherwise, it can have the opposite of the intended effect and can be visually overwhelming (or look gamelike).
In traditional user interface design, color is used to associate or separate objects and information in the following ways:

• discriminate between different areas
• show which things are functionally related
• show relationships among things
• identify crucial features
Furthermore, when colors are used to signify information, studies have shown that the mind can only effectively follow four to seven color assignments on a screen at once.
The most illegible color is light blue, which should be avoided for text, thin lines, and small shapes. Adjacent colors that differ only in the amount of blue should also be avoided. However, for things that you want to make unobtrusive, such as grid lines, blue is the perfect color (think of graph paper or lined paper).
My archive became opaque like the sea: You can see a couple inches into the deep but you know there is much more that you can’t access. You can dive deep, but still you just see a couple of inches at any time. Therefore, I thought of it in terms of unexplored territory for which I need mapping methods and such.
They look much like a table of contents. It’s because they *are* tables of contents. A table of contents is a structured set of chapters of a book, a set with hierarchy and order. Of course, a book’s page sequence is ordered according to the table of contents for the reader’s convenience. A structure note doesn’t need to adhere to any didactic needs or any needs other than yours.
After a while, I did not only have structure notes that structure content notes, I also had structure notes that mainly structured sets of structure notes. They became my top level structure notes because they began to float on the top of my archive, so to say.
A sign of not dealing with structural layers is the use of project folders, and folders in general. If you can’t cope with potentially infinite complexity you have to compensate. One way of compensation is lowering the demands on the system. If a system encapsulates single projects or topics, chances are that it can’t cope with complexity. This is okay if you want to just work on one project. But if you want to use a system as an aid to writing and as a thinking tool you should opt for a system that is powerful enough for a lifetime of thoughts. So, watch out for folders and projects. They are the means for dealing with encapsulating and limiting complexity. In addition, they hinder the most productive way of knowledge production: the interdisciplinary part.
Folgezettel create one single hierarchy. Its meaning is minimized by the arbitrariness of the position: You can put a Zettel in one position or another. It is not important as long as you link from the other position to the Zettel. Structure Zettel on the other hand do not introduce one single hierarchy but *the possibility of indefinite hierarchies*. If there are indefinite hierarchies, the position of each hierarchy has zero importance to the individual Zettel. You can make it part of one hierarchy, or another, or both. You can even create new hierarchies. In this second difference lies the advantage in power of Structure Zettel over Folgezettel.
Instead of remembering individual Zettels, you would enter the Zettelkasten at a point that seems associated with the topic you are thinking about, then you’d follow the links. This is exactly how our own memory works: Mostly you don’t just recall what you memorized but surf through the associations until you are satisfied with what you loaded into your working memory.
zettelkasten.de
• The Folgezettel technique realizes two things: hierarchy and direct linking. This hierarchy, however, is meaningless. It is a hierarchy on paper only, because you don’t file one Zettel as a true child under a true parent but just place it at a position that seems fair enough because it is somehow related to its parent.
• The Structure Zettel technique creates hierarchies. Direct linking is possible via unique identifiers.
• You can replicate the Folgezettel technique with no loss of functionality with the Structure Zettel technique.
• By using the Folgezettel technique, you create a single general hierarchy (enumerated, nested list) for your Zettelkasten. The same would be true if you create a Master Structure Zettel that contains all your Zettel.
A good implementation of the Zettelkasten Method is lean. This means that you are fully focused on knowledge-based value creation.
Linking should be done in such a way that knowledge is created. For this very reason, it is necessary to create precise link descriptions (I call those “link contexts”). These descriptions themselves are new knowledge and not merely something you do for your Zettelkasten to work properly.
Incorporating new notes into [structure notes](https://zettelkasten.de/introduction/#structure-notes) is not merely about making the note retrievable. Incorporation of the individual note is about relating it to a higher, more general structure. This improves the utility of the structure note by making it a better entry point, tool box, overview or whatever you are using it for.
There are two ways to get respect for your taste. The first is Rubin's way, where you have such a grasp on what you like that it influences how other people like it. The second is having such a pedigree in the work you've done in your craft that people respect your taste. As a designer and builder, the second one is your greatest power.
having taste can be the differentiator between what you make and an interface generated by Artificial Intelligence (AI).
“Nobody tells this to people who are beginners, I wish someone told me. All of us who do creative work, we get into it because we have good taste. But there is this gap. For the first couple years, you make stuff, it’s just not that good. It’s trying to be good, it has potential, but it’s not. But your taste, the thing that got you into the game, is still killer. And your taste is why your work disappoints you. A lot of people never get past this phase, they quit. Most people I know who do interesting, creative work went through years of this. We know our work doesn’t have this special thing that we want it to have. We all go through this. And if you are just starting out or you are still in this phase, you gotta know it’s normal and the most important thing you can do is do a lot of work. Put yourself on a deadline so that every week you will finish one story. It is only by going through a volume of work that you will close that gap, and your work will be as good as your ambitions. And I took longer to figure out how to do this than anyone I’ve ever met. It’s gonna take a while. It’s normal to take a while. You’ve just gotta fight your way through.”
your taste will be the differentiator between you and other designers or software engineers in the craft of your work.
If you’re at a loss on how to develop taste, here are a few quick ideas of ways to practice:

• Write as a form of critique. Whether it's about design aesthetics or delightful apps you’ve used recently, write about the attributes that connect it to the taste you have. You don’t even have to publish it.
• Make mood boards of objects that have similar creative attributes. Can you find a piece of furniture that has similar aesthetics to a piece of hardware or software?
• When listening to music you like, break down what makes you develop the taste. Is it the type of vocals, rhythm, lyrics, or something else?
There are very few IDs per item in his register, sometimes just a single one, next to each term. His register is purely a list of entry points, not a tag list.
The register was only a list of possible entry points to the biggest and most important clusters of notes.
In contrast, referring to an atomic note is unambiguous: when you reference it, you will know what the ‘thought’ is. There should be no room for guesswork. That is what the rule of atomicity means: Make sure that the layer of content and the boundaries between notes match and are well defined. Then, and only then, is a reference to an address identical to a reference to a thought.
The difference between knowledge and information, in practice, is quite simple. Information could be summarized in one sentence most of the time. Most of the time, it is “dead”. Information just *is*.
Sometimes, however, you will refer to other Zettel as your source of inspiration. In that case, you base your thoughts on something you have already processed in the past. You reference the Zettel by linking to it via the ID, connecting the new to the old.
If you have no reference at all, perhaps that means you wandered through the forest and had a sudden insight about the true nature of the world. In that case, you don’t need to do anything in this reference section. If no reference is given in a Zettel, it is your own thought by default.
To make the most of a connection, always state explicitly why you made it. This is *the link context*. An example link context looks like this: [link context example](https://zettelkasten.de/introduction/202010271850_link-context.png)
This type of connection is where one of the leading knowledge creation mechanisms of the Zettelkasten is found: The meaning of the link, the *why* of the link, is explicit. The explicit statement of the why is created knowledge. If you just add links without any explanation [you will not create knowledge](https://zettelkasten.de/posts/understanding-hierarchy-translating-folgezettel/). Your future self has no idea why he should follow the link. One might think *now* that the links are placed for a good reason. However, if you create a web of thoughts where you cannot be confident that following a link will lead you to something meaningful, then surfing your own thoughts will give you a feeling of disappointment. Your future self will judge its past self (you!) as unreliable.
Not each relevant Zettel was listed for each keyword. Only the most central Zettels that served as entry points to a topic.
Also, Luhmann had [hub notes](https://zettelkasten.de/posts/zettelkasten-hubs/). These are Zettels that list many other places to look at for a continuation of a topic. Luhmann’s Zettelkasten posed a severe challenge in getting to all the relevant parts of a search, especially compared to a digital Zettelkasten.
Whenever I write a new Zettel on the Zettelkasten Method, I make sure that I place a link to it on this Structure Note, or on a Structure Note that is itself referred to by the main Structure Note on the Zettelkasten Method.
To recap: a Zettel that captures the relationships between other Zettels is called a Structure Note. The practice of creating Structure Notes will further train your ability to deal with general patterns of knowledge. Capture the results in your Zettelkasten so they will be available for later use.
Actually, one more note about making way too many icons for clients to choose from. To automate this a little bit, I set up a Photoshop document that had a smart object for the glyph, and a variety of backgrounds. This wasn’t necessarily anything we presented to our clients, but it was a great tool for us to see if any color or style jumped out as something we should explore further.
Digital products have one crucial disadvantage over atom-based products and services: Intangibility. Apps live on your phone or computer. No one can see them except for you. The signal message of a fitness app is the same as that of a gym membership or athletic wear (strength & fitness display), but the signal is much weaker because you can’t distribute it to anyone.
But there’s a difference to other software products: **Superhuman has signal distribution built in.** Every time you send an email via Superhuman, your recipient will notice a little “Sent via Superhuman” in your signature. In a similar fashion, apps like Strava use their built-in social networks as a signal distribution channel for their premium subscriptions. Users who have upgraded get a little premium badge and appear in exclusive premium leaderboards.
A social network like Path attempted to limit your social graph size to the Dunbar number, capping your social capital accumulation potential and capping the distribution of your posts. The exchange, they hoped, was some greater transparency, more genuine self-expression. The anti-Facebook. Unfortunately, as social capital theory might predict, Path did indeed succeed in becoming the anti-Facebook: a network without enough users. Some businesses work best at scale, and if you believe that people want to accumulate social capital as efficiently as possible, putting a bound on how much they can earn is a challenging business model, as dark as that may be.
Writing on small cards *forces* certain habits which would be good even for larger paper, but which I didn’t consider until the small cards made them necessary. It forces ideas to be [broken up into simple pieces](http://www.dansheffler.com/blog/2015-08-05-one-thought-per-note/), which helps to clarify them. Breaking up ideas forces you to link them together explicitly, rather than relying on the linear structure of a notebook to link together chains of thought.
Once you’re forced to adopt a linking system, it becomes natural to use it to “break out of the prison of the page” -- tangents, parentheticals, explanatory remarks, caveats, … everything becomes a new card. This gives your thoughts much more “surface area” to expand upon.

*Don’t get too caught up in what address to give a card to put it near relevant material. A card can be put anywhere in the address system.* The point is to make things more convenient for you; nothing else matters. Ideally, the tree would perfectly reflect some kind of conceptual hierarchy; but in practice, card 11c might turn out to be the primary thing, with card 11 just serving as a historical record of what seeded the idea.
In 2023, the scene is very different. Best practices in *most* forms of software and services are commodified; we know, from a decade plus of market activity, what works for most people in a very broad range of contexts. Standardization is everywhere, and resources for the easy development of UIs abound. It’s often the case that what the executives or PMs or engineers are imagining for an interface is *fine*, perhaps 75% of where it could be if a designer labored over it, and in some cases more. It’s also the case that if a designer adds 15% to a design’s quality but increases cycle time substantially, is another cook in the kitchen, demands space for ideation or research, and so on, the trade-off will surely start to seem debatable to *many* leaders, and that’s ignoring FTE costs! We can be as offended by this as we want, but the truth is that the ten millionth B2B SaaS startup can probably validate or falsify product-market-fit without hiring Jony Ive and an entire team of specialists.
Indeed, even where better UIs or product designs are possible, we now deal with a market of users who have developed familiarity with the standards; that 15% “improvement” may in fact challenge users migrating or switching from other platforms, or even just learning to use your software having spent countless hours using other, unrelated software.
A well-designed mind map is an overview of the experience that a product team is going to offer to the end user, and this overview helps designers to keep track of the most critical aspects of the interaction (such as what users will try to do in an app).
Are you mapping a current state (how a product works currently) or the future state (how you want it to work in the future)? Depending on the answer, you will build your map based on the design hypothesis (if you’re mapping the future experience of the product) or user research (if you’re mapping the current experience).
There is a simple technique that can help you to find all possible scenarios of an interaction. Use the “As a user, I want to [do something]” technique. “Do something” will describe the action, and this action will be a candidate for the nodes of your mind map. But, remember that you need to focus on user needs, not features of your product.
The central object can be a feature of your product that you want to learn more about, or a specific problem to solve. All other objects will be subtopics of that starting point.
In essence, first principles thinking is breaking problems down into fundamental truths and constantly questioning what you think you know.
However, if the feature doesn’t meet the users’ needs then it won’t get used. If it doesn’t get used, it won’t provide much value to either the business or the user.
“I think it’s important to reason from first principles rather than by analogy. The normal way we conduct our lives is we reason by analogy. **With analogy** we are doing this because it’s like something else that was done, or it is like what other people are doing. **With first principles** you boil things down to the most fundamental truths…and then reason up from there.”
The regimented daily pages notetaking routine made everything worse. It turned the writing process into a multi-year death march, filling the folders of my hard drive with unusable nonsense that I didn’t believe in then and don’t believe in now.
The job of *notes for creativity* is to:

• Generate ideas in a structured way through research and sketching.
• Preserve those ideas.
• Explore the ideas until they have gelled into a cohesive plan or solved a problem.

The job of *notes for knowledge* is to:

• Extend your memory to help you keep track of useful information (client data, meeting notes, references).
• Connect that information to your current tasks or projects so that you can find it when you need it.

The job of *notes for understanding* is to:

• Break apart, reframe, and contextualise information and ideas so that they become a part of your own thought process.
• Turn learning into something you can outline in your own words.
Notes for creativity tend to favour loosely structured workspaces. Scrivener and Ulysses probably come the closest, though, in practice, I have doubts that either of them is loosely structured *enough*.
Notes for knowledge favour databases, ‘everything-buckets’ (apps that expect you to store *everything* in them), and hypertextual ‘link-everything’ note apps.
Notes for understanding tend to favour tools that have powerful writing or drawing features (which you favour will depend on your skill set and comfort).
Knowledge bases become too rigid to serve as workspaces for creativity. The creative spaces are too loosely structured to work well as knowledge bases. You can integrate writing and drawing tools in either, but that serves notetaking for understanding only up to a point. Most knowledge bases preserve too much detail and context, which gets in the way of reframing and contextualization. And writing or drawing tools that are too fully featured could make the creativity tools too complex to use.
They say *"don't let perfect be the enemy of good"*. When it comes to blogging on a personal site I'd also suggest to embrace the "good enough" mindset. There are situations where you want to spend time fine-tuning your writing, choosing the perfect word, and rewriting the same sentence until it's perfect. I'd argue that a personal blog is not the place for that. Not because it's not worth it but because it's not really necessary. Personal blogs to me are more like conversations. When you talk to someone you don't say the same thing four different times until you find the perfect phrase. You just talk, you communicate and if something is not clear you clarify it.
Explorers poke through the unknown, experimenting, trying many little dead-ends. Explorers meander, constantly changing directions based on hunch, mood, and curiosity. **Explorers are hard to follow.** It’s better to let them wander alone, then hear their tales. Explorers occasionally find a great place that would make a better home for many people. So that makes a job for a leader. Leaders are easy to follow. Leaders say, “Here’s where we’re going. Here’s why this will improve your life. Here’s how we’re going to get there. Let’s go.” Leaders sell the dream. Leaders describe the destination clearly and simply so it’s easy to understand and repeat. Even someone in the back of the pack, that can’t hear the leader, can follow along. **Leaders go in a straight line.** Leaders simplify. Explorers are bad leaders.
After a year of work, the CEO greenlighted a new type. We were ready to launch. However, someone in the product organization started demanding to A/B test the new typeface, which was already greenlighted. So we set up the A/B test. iOS and Android came back neutral; the Web came back slightly negative. We hypothesized that the new font caused slightly longer load times because people didn’t have it cached. We tried to load both fonts to everyone and display the new font only to some to control the load time variable. We realized the A/B testing platform had a bug, and the experiment was not running properly. Our engineers spent weeks trying to debug. We couldn’t find any reason why the new typeface would perform worse. We developed it to make it more legible and work better in smaller sizes. We eventually launched the typeface regardless of the A/B testing. This moment made me write off A/B testing as a cargo cult or a way to avoid making decisions. We wasted weeks testing something where the only outcome was to launch it anyway.
Linear is a high-frequency product, similar to email clients, with daily or multi-short sessions. It’s a heavy front-end product, and most actions involve managing things in the UI. You need to make it fast and eliminate friction as much as possible. A tool like Linear is about communicating and coordinating work. Getting people to use the tool requires making it as frictionless as possible and, ideally, something they want to use instead of having to use it.
**Karri:** We do, and it’s a constant tension between our thinking and customer feedback. Since there are many existing tools, some of the feedback comes from the fact that people are used to things, not necessarily that they are the right way to do things. We internally talk about this tension as art and science. The science part is your research to understand the customer’s problems. The art part is you use your intuition and judgment to build. I’d say Linear is maybe 80% of our intuition and 20% of what people have asked for.
We also often default to the most constrained approach because it uncovers more reasons why the solution is too constrained. You should set constraints, see if people are hitting walls with them, and expand to find the optimal solution. If you start with the most flexible or expansive solution, you don’t get the feedback, and your users might use the feature in unintended ways.
If I asked you what music you like, the chances are the answers will be sporadic and unorganized—whatever is top of mind. If I instead asked, “Who are the top five musicians of all time?”, the ordered list of 1-5 forces critical thinking and a ranking of value that an unordered list doesn’t. Creating a list is one of the simplest ways to build taste, debate, and put your opinion out there. Creating and publishing lists makes you exert your point of view on what's important. Whether it's a Top 10 year in review or the Mount Rushmore of Los Angeles Lakers players, it's human nature to rank.
Stephan Ango
The hybrid path means developing expertise in two or more distinct areas. Having several specialities allows you to see patterns that no one else can see, and make contributions that no one else would think of. **The world needs more hybrid people.**
The T-shaped hybrid path is one that many curious people follow. You grow your skillset and experience in areas that are adjacent to your dominant expertise. For example engineering and design, or singing and dancing.
Hybrid people are important for the same reason that [composite materials](https://en.wikipedia.org/wiki/Composite_material) and [alloys](https://en.wikipedia.org/wiki/Alloy) are important. From [Wikipedia](https://en.wikipedia.org/wiki/Composite_material):

> These constituent materials have notably dissimilar chemical or physical properties and are merged to create a material with properties unlike the individual elements.

By becoming a hybrid, you can become greater than the sum of your skills. By becoming a hybrid you can choose how you want to be unique. Countless unique combinations are available to you.
Motivation, broadly speaking, operates on the erroneous assumption that a particular mental or emotional state is necessary to complete a task.
Discipline, by contrast, separates outwards functioning from moods and feelings *and thereby ironically circumvents the problem by consistently improving them*.
**Successful completion of tasks brings about the inner states that chronic procrastinators think they need to initiate tasks in the first place.** Put in simpler form, **you don’t wait until you’re in Olympic form to start training. You train to get into Olympic form.**
There is another, practical problem with motivation. It has a tiny shelf life, and needs constant refreshing.
By contrast, discipline is like an engine that, once kickstarted, actually *supplies* energy to the system.
In summary, motivation is trying to feel like doing stuff. Discipline is doing it even if you don’t feel like it. You get to feel good *afterwards*. Discipline, in short, is a system, whereas motivation is analogous to goals. There is a symmetry. Discipline is more or less self-perpetuating and constant, whereas motivation is a bursty kind of thing.
If a colour is connotative it means viewers associate some meaning of their own with it. This might be personal, but it is also often cultural. Lots of people from the same culture might associate that colour with that meaning. For example, red is associated with “danger”. If a colour is denotive it means the designer associated the colour with a meaning. For example, blue could be associated with “interactive”. Note that a colour can often be both connotative and denotive. As a designer you’re in control of the denotive associations, but not the connotative associations. You should make sure they don’t clash. For example, if red is associated with both “danger” and “interactive”, people might think it’s dangerous to use interactive elements.
Let’s call this “The Collector’s Fallacy”. Why fallacy? Because ‘to know about something’ isn’t the same as ‘knowing something’. Just *knowing about* a thing is less than superficial since knowing about is merely to be certain of its existence, nothing more. Ultimately, this fake-knowledge is hindering us on our road to true excellence. Until we merge the *contents,* the information, ideas, and thoughts of other people into our own knowledge, we haven’t really learned a thing. We don’t change ourselves if we don’t learn, so merely filing things away doesn’t lead us anywhere.
Just like photocopying is self-rewarding and addictive, I argue that we fall into the same trap of false comfort when we bookmark web pages and sort the bookmarks into folders or tagged categories. Bookmarking a web page is satisfying because we get rid of the fear of losing access to the information.
Searching on a topic in your archive is like firing a shotgun into the woods and hoping that there will be food on the table somehow. I need a sniper rifle, night vision goggles, and infrared satellite pictures as if I have cheated the hell out of Counter-Strike. (I never did by the way.) There is some sneaky, precious game out there. The tags for objects are much more precise and reveal real connections. They narrow down the search way more which is hugely important if your archive grows. They only give you what you want, and not the topic which also contains what you want.
But I also believe taste is something we can and should try to cultivate. Not because taste itself is a virtue, per se, but because I’ve found a taste-filled life to be a richer one. To pursue it is to appreciate ourselves, each other, and the stuff we’re surrounded by a whole lot more.
taste is a mode. It’s a manner of interpretation, expression, or action. Things don’t feel tasteful, they demonstrate taste.
While taste is often focused on a single thing, it is often formed through the integration of diverse, and wide-ranging inputs. Steve Jobs has said, “I think part of what made the Macintosh great was that the people working on it were musicians and poets and artists and zoologists and historians who also happened to be the best computer scientists in the world.”
writing in lowercase, therefore, feels like a blatant middle finger to those decades of conditioning. it's like saying "*i refuse to be a cog in your machine anymore. i'm my own person, and i don't need your approval.*" and just like that, the chorus of old voices in your head—the teachers, parents, bosses, etc—loses its authority. you now have permission to express yourself fully. and boy is it cathartic.
My nutshell review of the experience of using a Pixel 4 can be summed up [just by looking at this screenshot of my home screen](https://daringfireball.net/misc/2020/07/pixel-4.png) (or [this one](https://daringfireball.net/misc/2020/07/pixel-4-alt.png) with an alternate system font). “Instagr…”, “Podcas…”, “Play St…” — that’s how the home screen displays the names for Google’s own apps and Instagram, one of the most-used apps in the world. Now imagine that the same lack of attention to detail that would lead to a home screen like this, where even Google’s own essential apps like Play Store have their names truncated hamfistedly, is applied to every aspect of the entire system. That’s what using Android on a Pixel 4 is like.