
Read this post with annotations!
I learned about rationalism for the first time last Friday, on day 1 of a three-day, conference-style retreat for rationalists.
This post is sortaaa a(n anthological) sequel to this one, in that after publishing that post, I frantically emailed it to a handful of folks, one of whom recommended that I volunteer for said conference/retreat/event. I checked out the website, loosely recognized a couple names on the blogroll, vaguely knew of the Less Wrong community, and said: ¯\_(ツ)_/¯.
Recently I've been feeling like my worldview has become increasingly sheltered. I filter my opinions on things I read, watch, or listen to through people I defer to for commentary. I avoid certain media sources and platforms because my values-aligned peers don't respect them. I'll go sky-diving in a heartbeat, but listen to a Joe Rogan podcast? Yuck!
The reason we/I care so much about building a search engine for collaboration (disclaimer: our website's outdated) is because we want to engineer serendipity between people and knowledge. I've been on an "engineering serendipity" kick for some time; I know it's a term that's come in and out of vogue over the years, but today it feels more relevant than ever. In a post-truth society, being able to surface the right answers to the questions you're asking is increasingly about finding the right people to dynamically respond to them — rather than the right resources to statically consume.
And in a world so perennially divided on socioeconomic values, and also, all other topics, the "right" people to respond are most likely the ones who share as little of your knowledge base as possible: people who come at the same questions from an entirely different angle.
That's been my running theory, at least: my secret motivation behind everything we're building is that if we can understand how far apart we are from each other online, then we can bring the very edges of our social-knowledge networks closer together. A more intimate internet, so to speak.
All that to say, I jumped at the chance to be around a group of people with whom I knew nothing in common.
What I figured we did have in common were conversational pattern languages.
What I've noticed is that it's a lot easier to learn new things when a conversational "dialect" is familiar to you. If you're another international school kid, no matter where in the world, I know exactly how to code switch into conversation with you. Understanding how memetic primitives are shaped within a subculture helps you pick up on dialectical deviations from a colloquial norm.
Simpler: if you understand the way a group of people talk, then it's easier for you to pick up on the nuances of what they're talking about.
And once you understand what they're talking about, it's a lot easier to find common ground, recontextualize yourself within a foreign cultural container, and step out of your filter bubbles.
Online, subcultures and their contextual "dialects" depend on how content gets distributed across different social platforms. Different content formats — short-form vids on TikTok; longer-form on Youtube — are built around different platform features, meaning that conversation styles are influenced by what reaction mechanisms, non-linear editing functions, and reputational metrics a platform prioritizes.
Simpler: the online analog of "language is shaped by culture" is "conversation is influenced by content format".
All that to say: I knew of Less Wrong as a group of people that engage across Substack and personal blogs (i.e. long-form written content) and figured that given that that's how I prefer to consume content online, we might have something in common, and I'd probably enjoy learning from people within their community.
On Rationalism
On Day 1, someone asked me what the probability of something was, and I asserted "100 percent!" I was promptly schooled; the rationalist community does not engage in statistical absolutes.
I learned two things about Less Wrong:
Rationalism, as defined by the community, is rooted in effective reasoning and Bayes' Theorem: figuring things out through logic & principle, and updating your mental model as you intake new information.
Post-rationalism is a subset of the larger Less Wrong ethos: if rationalism is about getting things as "right" as possible, post-rats are preoccupied by making things as "useful" as possible. I decided I'm a pre-rat: intrigued by rationalist lore, but not yet informed enough to have my own opinion.
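That Bayesian habit is concrete enough to sketch. Here's a toy illustration of updating a credence on new evidence — my own worked example, not anything from the LW canon:

```python
# A toy illustration of the Bayesian updating that rationalist practice is
# built around: start with a prior, observe evidence, revise your credence.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' Theorem."""
    numerator = p_evidence_if_true * prior
    p_evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / p_evidence

# Start 50/50 on a claim; observe evidence that's 4x likelier if the claim is true.
belief = bayes_update(prior=0.5, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(round(belief, 2))  # → 0.8
```

Which is also why "100 percent!" got me schooled: a prior of 1.0 (or 0.0) is the one credence no amount of evidence can ever move.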
On Day 2, I discovered Harry Potter and the Methods of Rationality, a piece of community heritage. HPMOR is a fanfic written by Eliezer, LW's founding father (I think). I personally find that fiction is the easiest way to start picking up a new topic; it grounds novel information in a contextual narrative that feels easy to follow and is ultimately resolved.
I'd been intimidated by the slog of material I saw online — it's one thing to share a pattern language, but something else entirely to figure out where to start when you don't have the context necessary to understand the content. Fiction to me feels like a safe space for initiation; to pick up new vocabulary without the pressure of applying it.
Two more things I learned:
Most people (if not everyone?) in Less Wrong are also big on AI discourse. People are split on POV — some are logically prepping for existential doom; others are rationally optimistic about a superintelligent future.
What is in alignment (for the most part) is the notion that EA (effective altruism) is the most effective and logical path forward, towards socio-technological advancement. Also, e/acc sucks.
On Day 3, I started having conversations.
Up until now, my interactions with people had mostly been didactic: me asking questions that prompted pedagogical answers vs a dynamic back-and-forth. On Sunday, I got more comfortable being part of group convos.
The best way I can summarize what I learned is through a product insight: even though we consume the same type of content (with relative overlap in context), how we consume that content is absolutely not the same.
I am, for lack of a better term, very ~easy breezy~ with my content consumption. I'm not detail oriented; I skim many rabbitholes at once; I pick up on the gist of many things and miss the nuances of many more.
Almost by definition, rationalists (present and post) are thorough in their reading. People seem to value depth over breadth, but because the community attracts so many different "genres" of people, the collective breadth of knowledge across the social container is extensive.
That means that conversations seem to be about grounding in shared understanding. This largely works in favour of my hypothesis that when you're mutually fluent in a conversational language, exchanging knowledge is significantly easier; everything from IFS to astrophysics is fair game, and people really listen when you speak (and vice versa).
What I saw was that people really take their time exploring many sides of a proposition. I'm more used to ad-libbing in dialogue; yes, and-ing the subject at hand; adding more fuel to the conversational fire. But here, people spent their time turning over each and every conversational log; not adding fuel but optimizing for surface area. A conversation is over when the topic of discussion has been fittingly spent, no sooner or later.
This translates, too, to the Less Wrong interface. Something really cool that Ben & co just shipped in their community forum is a chat UI for dialogues. This is different from talking in a Reddit-style thread: it's more like a privately-public DM.
Here's an example — basically, up to five people in a group chat get their own chat box, and what you're writing is visible before you hit enter, like Google Docs. Sometimes people are typing for 10 mins, and this visibility means you can start to respond to earlier points, or make a correction before a conversation veers off course. Then, you can publish it as an "artifact" of a live discussion. You'll see that other people can comment asynchronously at the end of a post in a Reddit-style thread, as is the norm on many published posts/essays.
It's fucking cool! But also, it's not a feature that would cross my mind, given my own conversational habits online — most meaningful conversations I have online are asynchronous, not live.
Another cool thing: apparently, the Harry Potter fanfic was written iteratively, and serialized like a newsletter (through RSS feed subscriptions). Part of the lore bequeathed to me is that mid-way through the story, Eliezer reasoned himself into a corner he couldn't write the characters out of, and told his readers that they were responsible for keeping Harry alive because he couldn't figure out how to close out the narrative. That's so fun!
This is interesting to me because during the last WOP cohort,
gave a talk about network effects on Substack. She'd been looking to sign with a publisher for her gothic novel, but found that book publishing is as much of a status game as, well, everything else — publishers want to see audience numbers behind an author before they sign off on a deal. She launched a Substack instead, where she could a) grow her audience, and b) serialize her book in tandem. This is not so different from what Eliezer turned HPMOR into back in the 2010s, during the golden age of fanfic! And while the outputs of serialized publishing might look the same between episodic fanfics and iterative newsletters, the difference is that monetization is baked into the distribution models of the latter.
But what I think is particularly cool about fanfic, especially in Eliezer's application of it, is that content invites crowdsourced contribution. Consumption has bled into conversation; conversation has seeped into collaboration. Not so much UGC, but creator-led CGC (community generated content).
Elle's written a lot about bringing fiction novelists into the creator economy, using newsletters and Patreon as primary revenue streams — something that a lot of fanfic writers are already establishing. She's also talked about how fiction writers can borrow a lot from non-fiction newsletter models;
As evidenced by this chart by Alexey Guzey, there are plenty of Substack writers who are putting out quality non-fiction content for their followers and monetizing it—earning in the hundreds of thousands of dollars annually, and in some cases millions, just from reader subscriptions! But could fiction do the same? (link)
But, something that fiction platforms continue to outshine non-fiction newsletters on in distribution is content discovery.
A lot of Less Wrong's younger disciples seem to have stumbled into the community through that HP fanfic; fiction was the distribution mechanism that originally engaged these folks in rationalist texts. And as someone who now feels like there's an entire world of knowledge that's been opened up to me from this one weekend alone — starting points to explore; queries to inquire over; people to peruse — it makes me wonder: what does it look like to treat discovery for non-fiction the way that we treat discovery for fiction?
On Digest
I ran into 5 friends at Less Online, across three separate corners of (relatively recent) life: Mars, SF, and
who wasn't just part of the WOP cohort with me, but also on the Zoom with Elle! How much earlier could I have gotten unblocked on Digest, if I had known who in my network to ask for help?
I spoke to Ben a while back about Matter's roots in social discovery. IIRC, the platform had started out with more of a "come for the network, stay for the tool" approach, where "the network" was about discovering content through friends, and found, much like many other social media-esque startups, that the cold-start problem and the activation energy of engaging with new user mechanisms added too much friction to onboarding.
@Harry gave me similar feedback when I reached out to him about Digest:
...at a meta level, my advice is to try to come up with something that does not rely on network effects, can be tested at very small scale, and then just figure out if it can really be "sticky" (people use it and want to use it every day) for even a small group of people.
We actually did try to do that with the single-player recommendation engine we shipped a couple months ago. The simplest analog for this MVP is the Spotify radio feature — you can select an individual song on Spotify and generate an entire playlist of content based on that song. With the Reading Widget, when you're reading an essay you like (on Substack, a personal blog, Mirror/Paragraph etc), you can save it directly through the Chrome extension and get recommended cross-platform content based on that individual essay. We do this through a semantic context index, which ranks content similarity based on relevancy, not virality.
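For intuition, here's a minimal sketch of what relevancy-based ranking looks like, assuming each piece of content has already been embedded as a vector. The toy 3-d vectors, titles, and function names are my own illustration, not Digest's actual pipeline:

```python
from math import sqrt

# A minimal sketch of relevancy-ranked recommendations: embed each piece of
# content as a vector, then rank candidates by cosine similarity to the essay
# you just saved. (Toy 3-d vectors stand in for real embeddings.)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def rank_by_relevancy(saved_essay, candidates):
    """Return candidates sorted by semantic similarity, not virality."""
    return sorted(candidates, key=lambda c: cosine(saved_essay["vec"], c["vec"]), reverse=True)

saved = {"title": "On serendipity", "vec": [0.9, 0.1, 0.0]}
pool = [
    {"title": "Viral listicle",        "vec": [0.0, 0.2, 0.9]},
    {"title": "Engineering discovery", "vec": [0.8, 0.3, 0.1]},
]
print([c["title"] for c in rank_by_relevancy(saved, pool)])
# → ['Engineering discovery', 'Viral listicle']
```

The same machinery extends naturally from a single saved essay to any blob of text you can embed — which is what makes draft-based search (more below) a small step rather than a rebuild.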
While we found that people were conceptually excited about this, we ran into two headwinds:
Chrome extensions are an oversaturated category play
We'd anticipated that in-tab recommendations were a distinct enough value prop that people would be willing to use the Digest extension alongside their current curation tools, all of which also exist in widget form — Pocket, Curius, Sublime, Matter, etc. We figured that curation for us wasn't the competitive differentiator, but rather, the mode of engagement, and therefore, we wouldn't be competing with other tools, but engaging with an entirely different user touchpoint altogether.
The problem was that A) there are already too many tools out there; no one needs to add another one to their workflow, and B) it's not a novel user mechanism, it's an existing user behavior that's already cognitively associated with different consumption functions.
Without feature parity, we can't properly test differentiation (in the right market)
We weren't just aiming for a different touchpoint — we were aiming for a specific audience. You seek out specific recs when you're eager for a topical deep dive; your reading behaviours look a lot less like perusal and more like research. Reading until the topic of consumption has been appropriately explored, vs loosely skimmed. We were trying to sit with tools like Zotero and Hypothesis, not Readwise or Sublime.
The problem with this is that, without having minimum viable annotation features — one of the hallmarks of tools supporting the type of reading we were catering to — we were being put in the curation category instead (see above).
The other problem here too, of course, is that content discovery is tired. @Jad's talked a little about why Shelf stopped focusing on it;
Content discovery is not a burning user need. Yes, there are plenty of problems with content discovery, and yes “we don’t know what to watch” sometimes — but we’ve found that it’s simply not a burning human need and there are plenty of channels to discover content today. People aren’t short on new content. We’ve chosen to frame content discovery as an outcome of our experience, not as a need or value proposition for using our product.
We'd heard this too: about 40% of the users we interviewed prior to shipping our MVP said something along the lines of "I already have way too much to read; I don't need to add more to my plate."
We shipped it anyways because 1) I am a first-time founder and obstinate in my opinions (until proven wrong!), and 2) semantic recommendations were a means to an end; what we were really excited about was moving beyond content discovery to context discovery.
Here's the difference: the former focuses on consumption; the latter caters to conversation.
Curation tools today are largely distillation-based; culling down consumption choice anxiety, and parsing out subjective signal from noise. There are things you can do on top of that, of course: cluster, categorise, discover, compare, mark up, etcetcetc.
Tools for thought also allow you to do a lot of those things, but on top of your own content, plus contextualisation — you curate your personal knowledge graph based on content you've created.
What if we brought personal context to social knowledge graphs, using curation as an input of search, rather than an output of consumption?
Semantic search on personal drafts is how we're thinking about this: by the end of this month, you'll be able to save your unpublished drafts (on Google & Butter) to the widget, and get recommended content based on what you're currently writing about. This means that instead of having to summarize what you're looking for in a one sentence search query (or a longer convo with Claude/Chatty-G), you could smash out a notes dump of everything you know on, say, synthetic biology for your speculative fiction short story on transhumanism, and find relevant content based on the context you have within a given topic area.
We're really excited about this because:
Writers are really excited about this, and if we're aiming for the single-player play, there's a lot more opportunity for stickiness with WIP search than single-player reading recs (based on ux feedback).
Search isn't typically delivered in widget format, the way that curation/thought tools tend to be. That means that we're building a novel user mechanism for a distinct touchpoint, which allows us to differentiate through interface more so than discovery as a core value prop.
Drafting/writing is where context discovery has the most value-add, and invites in the sort of consumption that requires focus and attention...like long-form deep dives. If there is a place to test out discovery as a differentiator, it makes more sense to do so for the writing process, rather than the reading process.
And context discovery on social WIPs really lends itself well to conversation too; sharing personal docs as public drafts allows other people exploring semantically relevant content to "stumble into" your unpublished work when they use the widget. And because a draft is inherently iterative, discovery during the writing process allows you to crowdsource commentary, the same way that fanfic authors like Eliezer have learned to.
There are two questions here, of course:
Do writers want it?
How frequently is this a pain point that writers experience? How many writers want to crowdsource commentary? We heard a lot from people who wanted discovery to be contained to private communities, so that publicity felt intimate, rather than distributed.
Do readers want it?
What's the ratio of consumers who contribute to commentary? How many people who consume long-form written content engage with content the way I do, versus the more analytical, thorough approach that people who take their time on deep dives participate in?
What's the ratio of consumers who curate personal annotations and mark-ups on long-form written content? Can we use that as a proxy for consumers who would be willing to engage in these behaviours socially?
The bet we’re making, essentially, is that commentary on long-form written content is a user behaviour we’d see far more often /if/ there were social platform infrastructures that made it easy and intuitive to contribute to conversation through commentary.
On Consumption
What’s important about commentary is that it essentially serves as crowdsourced peer review. Take a look at Alexey’s critical breakdown of all of the flaws in Matthew Walker’s research, for example.
A social good that Musk has contributed to discourse on X (né Twitter) is the community notes feature. Crowdsourced corrections that users can add to posts, to give context omitted or overlooked by OP. Like fake news, political biases, or unannounced OnlyFans ads.
What’s particularly cool about how community notes work is that, in order for a note to be approved, multiple people who have formerly disagreed on a community note have to be aligned on a pending edit. Decentralized moderation lends more credibility to content on a free speech platform. Wikipedia has that same distributed trust barometer; almost 9,000 moderators can approve or deny pending reviews on Wiki pages.
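X's real scoring system is considerably more involved (matrix-factorization over rating histories), but the core requirement — agreement across a prior disagreement line — can be caricatured in a few lines. The rater names and helper function below are hypothetical, purely for illustration:

```python
# Toy caricature of Community Notes' "bridging" criterion (not X's actual
# algorithm): a note is only approved when at least one pair of raters who
# have historically disagreed both rate it helpful.

def note_approved(ratings, disagreeing_pairs):
    """ratings: {rater: rated_helpful}; disagreeing_pairs: iterable of
    frozensets of raters with a history of opposing votes."""
    helpful = {rater for rater, ok in ratings.items() if ok}
    return any(pair <= helpful for pair in disagreeing_pairs)

history = [frozenset({"alice", "bob"})]  # alice & bob usually disagree

print(note_approved({"alice": True, "bob": True}, history))    # → True
print(note_approved({"alice": True, "carol": True}, history))  # → False
```

The design choice worth noticing: consensus among people who already agree is cheap signal, so it counts for nothing; only cross-faction agreement ships a note.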
And it works! Despite the platform's donation-based, volunteer-led model, Wikipedia's emphasis on dynamic, transparent discourse and collaborative content editing are hallmarks of reliability, in contrast with "well-funded platforms like Facebook and YouTube [that] have become the subjects of frenzied debate about misinformation."
Another component here too, is that reliability isn't just rooted in trust in the content and information itself, but also in how the sausage gets made — how the story gets told:
On Wikipedia, there’s only one article about the Stoneman shooting, and it’s created by a group of people discussing and debating the best way to present information in a singular way, suggesting and sometimes voting on changes to a point where enough people are satisfied.
Importantly, that discussion is both entirely transparent, and at the same time “behind the scenes.” The “Talk” pages on which editorial decisions are made are prominently linked to on every entry. Anyone can read, access, and participate — but not many people do. This means both that the story of how an article came to be is made clear to a reader (unlike, say, algorithmic decisions made by Facebook), but also that there is less incentive for a given editor to call attention to themselves in the hopes of becoming a celebrity (unlike, say, the YouTube-star economy).
And yet: trust in the sausage — in the actual contents of a Wikipage — is contingent on verifiability, not validity. A fact is a fact as long as it can be cited; the source of that citation is less important than its inclusion.
In fact, Wikipedia's content policies bar people from autobiographically editing their own Wikipages — it's personal research, a subjective POV, and not citable material. Emily St. John Mandel solicited a totally normal interview in Slate so that her recent divorce could be factually represented on her Wikipage.
Whereas Wikipedia emphasizes objectivity, Less Wrong hinges on validity: what's the logic behind your viewpoint, how did you arrive at your conclusion, etc. And that parallel translates into how transparency is communicated across both platforms. Wikipedia's "Talk" pages make visible the editing process by which content gets represented on a page; Less Wrong's dialogue UI is a peep into your conversational partner's thinking process.
Of course, that's because the consumption intent is different; Less Wrong is about social discourse, rather than objective knowledge, and subjective viewpoints are primary to the conversational format. LW's New User Guide includes this chat DM from the 2010s:
Yo, I just had a crazy experience. I think I saw someone on the internet have a productive conversation.
I was browsing this website (lesswrong.com, from the guy who wrote that Harry Potter fanfiction I've been into), and two people were arguing back and forth about economics, and after like 6 back and forths one of them just said "Ok, you've convinced me, I've changed my mind".
Has this ever happened on the internet before?
– paraphrased and translated chatlog (from German) by Habryka to a friend of his, circa 2013-2014
But the other part of it is that verifiability is a tacit assumption; an unspoken norm within the community. Reliability of content is embedded in the context of the community container; contribution in Less Wrong self-selects for people who ground their discourse in careful scrutiny and rational examination.
On Outcomes
This has been wordy — thanks for sticking with me :)
Here's kinda where I wanna land with all of this:
I've spent the last few weeks hibernating, integrating feedback from everyone who's been kind enough to share their time. What's been great about this double punch combo of content + immersion is that I feel like I've gotten a very broad strokes scope of what the multiplayer market looks like for a research-driven social platform — who our "creators" are, what social behaviours we might incentivize, which data parameters could go into a social-production graph.
I know that at this stage, the point is to focus on a core target user and appeal to that niche — and that is what we're doing.
But being able to zoom out is helping me pattern-match the key pain points and "magic wand" features that crop up across multiple user verticals — the Venn diagram between the wants and needs of social stakeholders.
Primarily:
Readers want more visibility into references & research, because that's crucial to content credibility; writers want casual references to their work to earn attribution — in the same way that academic researchers have h-indexes.
Not all readers and writers are lonely, but an overwhelming majority of both find granular social features exciting. This Tumblr thread gives more context into how fanfic writers & readers engage with social commentary on Wattpad & AO3, for ex.
Some caveats here: larger, more established authors are less likely to want to crowdsource commentary on their WIPs, although ~85% of emerging writers say they're very much eager to. Which makes sense; as your audience scales, conversation becomes noisy, and commentary stops being about social discovery (finding other writers to connect with) and becomes about reader engagement.
That said, larger writers also create avenues to engage readers across their community in conversation with one another — conversation that they themselves may or may not participate in. ACX hosts open comment threads (this one has 690 comments); Elle shares writing prompts (this one has 31 responses); Slime Mold Time Mold ran a contest for mysterious topic reviews.
Research is a user behaviour as much as it is a material output. This is one overlap we see between consumers (readers) and creators (researchers); research is the mechanism by which both parties engage with content.
But products built for the research space focus on research as an input; tools for thought and curation widgets help you compile and contextualize the content you consume.
Writing platforms — like Butter, and Notion, and Google Docs — help you /process/ your research input into output (whether that becomes a storyboard for a Youtube video or a peer reviewed thesis), and build for editing, not conversational context.
Meanwhile, the outputs of netizen research — informal, systematic investigations into factual queries — are dispersed across social platforms, from livestreams, to blog posts, to interactive reports.
These outputs may be different content formats, and potentially different genres altogether (e.g. historical fiction, statistical analysis), but how the "sausage" gets made stays the same: informal, systematic investigation of critical content.
Earlier, I said this:
Online, subcultures and their contextual "dialects" depend on how content gets distributed across different social platforms. Different content formats — short-form vids on TikTok; longer-form on Youtube — are built around different platform features, meaning that conversation styles are influenced by what reaction mechanisms, non-linear editing functions, and reputational metrics a platform prioritizes.
My key takeaway coming out of this essay is that research is not just a product of consumption/creation: it is itself a contextual dialect. One where content reliability factors in verifiability, validity, and credibility — the editing process, the contextual analysis, and the authority of the speaker themselves.
And while academic researchers have standardized reputation metrics — institutional associations, citations, papers, etc — stringent researchers publishing casual outputs do not have the same attribution channels. We've spoken a lot about how we might grant netizen researchers the same reputation systems as PhD defenders: what's the hyperlink equivalent of an h-index? How do we track cross-platform, cross-medium citations?
But maybe where we could spend more time is thinking through the reaction mechanisms and non-linear editing functions that appeal to research as a conversational language.
To quote myself again: if "conversation is influenced by content format," and we know that research is the contextual dialect we want to engage in, what is the content format for consumption and conversation? What platform features could we be building for to inform research as a user behaviour?
Next time: from GitHub for social writers and netizen researchers -> Wattpad for research-driven commentary.
PS. If you've made it this far, open invite to request edit access on the working doc and add your own commentary!! That is quite literally the product thesis, pls help me reject the null :-)