Semi-automation

Posted on 2025-05-04 by Dmitri Zdorov


We crave super-smart assistants and even potentially human-killing superintelligences because they promise productivity boosts. But high productivity and hyper-efficiency sometimes ruin everything. Those cases are rare, but they do exist.

Romantic atmospheres, for example, often conflict with optimization. And I don’t just mean romantic as in intimacy — I mean those times you stumble into a cozy bar with live music, or someone’s preparing your meal slowly at a famous spot. In those moments, you want to slow down. You want things to be inefficient, and definitely not automated.

As technology, robotics, and automation spread deeper into every corner of life, we’ll start to value the non-automated more and more. Sure, there will be plenty of fake “handmade-style” experiences, but in a world where basic needs are solved, there could be a massive renaissance of what’s real — or whatever will replace the word authentic. A return to warmth and imperfection, on levels we can’t really imagine yet — just like we can’t quite imagine a world of abundant generosity with no need to work.

Lem, in Solaris, also touched on the importance of subjectivity — that as humans, our own context still matters deeply. Consciousness is, in many ways, how we feel, which makes it ultra-subjective. Then again, who knows? Maybe machines will get so good at mimicking that kind of subjectivity, we won’t be able to tell the difference. That idea, of course, was most beautifully explored in the original Blade Runner — where the line between real and fake blurs so much that the only thing left is the feeling of realness. And once we reach that point… maybe automation won’t ruin anything anymore. Or maybe it’ll ruin everything.

Our Cognitive Tendency to Simplify

Posted on 2025-05-03 by Dmitri Zdorov


All complex processes and global changes have many causes and many influencing factors. It's never just one thing. It's not even two or three, or five; it's many, very many, blended in a perfect soup of different angles and proportions.

And this is something all reasonable people more or less understand. But since, from an evolutionary perspective, we were just monkeys in the jungle not too long ago, we have a strong proclivity to explain everything to ourselves and our neighbors very simply: everything always has one main cause, which leads to one specific result. And as it turns out, this predisposition to explain everything simply keeps being reinforced by culture and even by education, even by science.

Moreover, success in society is often the result of focus and concentration on one thing. Not always, but almost always. This also reinforces the feeling that one thing can radically change something big and mega-complex. So we simply have to keep reminding ourselves of the complexity of everything happening. But that, in turn, starts pushing us toward the other extreme: hunting for the insidious plans of conspirators, and conspiracy theories grow on this soil like mushrooms after a warm rain. As they say, the situation is complex; we just need to remember this and not go to extremes, although we're not very good at that.

Cognitive psychologists, especially Daniel Kahneman and Amos Tversky, dug into this whole topic back in the 1970s, first in Israel and then in America, and called it "heuristics and biases." Heuristics are the mental shortcuts our brain takes to avoid dealing with all the complexity that constantly weighs on us; sort of instinctive reality simplifiers. They ran a mass of experiments showing how even the smartest people, professors, mathematicians, doctors of science, still fall into these simplification traps. For these discoveries Kahneman received the Nobel Prize in Economics in 2002 (Tversky unfortunately didn't live to see it, otherwise he would probably have received it too). They showed that all of us, even the most educated, often run on an autopilot that evolution designed for survival in the savanna, not for understanding inflation or the causes of wars. And what's funny is that awareness of this error doesn't protect against it; it's built into the very construction of our brain, like a bug that has already become a feature. But when we do suddenly have an epiphany about it, some pretty good insights emerge.

Fluid and Crystallized Intelligence

Posted on 2025-04-28 by Dmitri Zdorov


In the mid-20th century, sciences studying humans — psychology, neurology, and many others — started booming. How does the brain work? What is memory? What is consciousness? How do these things interact and develop inside us — and most importantly, how can we make them better? There was a huge explosion of new theories and concepts. One of them, in 1943, was proposed by Raymond Cattell. He was working on differential psychology — personality traits, abilities, motivation — and came up with the idea that we actually have two kinds of intelligence: fluid and crystallized (Theory of Fluid and Crystallized Intelligence).

Fluid intelligence is what helps us solve new, abstract problems, when there’s no ready-made knowledge to fall back on. Crystallized intelligence leans heavily on what we already know and have experienced. We all have both, and we use both all the time — just in different proportions depending on the situation.

They say fluid intelligence peaks around age 27, while crystallized intelligence peaks somewhere around 50–60 and starts to decline closer to 70. But both are also strongly linked to mental load. Like in sports: if you keep challenging yourself, you stay in better shape. Even though some experiments showed that memory exercises don't really boost fluid intelligence, the effect varies a lot from person to person, especially among people on the autism spectrum.

They did experiments where people could solve tasks either intuitively, inventing their own methods, or by leaning on learned skills. The common belief was that young people invent more and experienced people use what they know. But honestly, I think that's a bit forced. Different personalities matter more: some people will "do it right" from childhood, and others will keep reinventing the wheel till the end of their days.

This whole topic is resurfacing now because many top tech companies have lost their founders — and the big, seasoned managers have taken over. The problem is, breakthrough innovation needs fluid intelligence, which younger people usually have more of. But in corporate environments, not many experienced adults are thrilled about being told what to do by some young, relatively inexperienced — even if pretty sharp — guy. It’s tolerated when it’s a charismatic founder leading the charge. But when the founder is gone — dead, stepped down, or just drifted off — the leadership falls to the seasoned veterans, and it’s hard to put them under some untested newcomer.

But again — I don’t fully buy this age-based thinking. It’s not just about age. It’s also very much about profession. Managers think differently from product designers. Even product managers think differently. What these companies really need is not just "younger and fresher" blood — but people across different levels of experience, all focused on building great products, not just improving margins. Sure, companies need everyone — not just product designers. But the focus of leadership really defines where the company goes.

Apple Watch at 10

Posted on 2025-04-26 by Dmitri Zdorov

Dimka's Apple Watch with complications

I got my first Apple Watch on day one, when they launched 10 years ago, and I've been quite happy with the product ever since. Many models have been released since then, and I don't have the latest one, but I'm not chasing the newest technology here. What I have suits me very well; I use it every day.

I have many custom-configured watch faces, but I typically use Infograph (I have several of these too, and change them depending on my mood) with lots of complications. I've got many different bands, but most often wear the yellow Nomad Sports Band. The main thing the watch gives me is that it lets me use my other devices less, particularly my iPhone. I receive notifications on the watch, and it's convenient that they can be configured more precisely than on the phone. I use it to set timers, check the outdoor temperature, and control lights and other smart home devices. I can use it as a remote for the Apple TV we use with our projector, record voice memos, and see what time it is in other cities. The watch lets me view 2FA codes, confirm purchases, and verify my identity in place of passwords, etc.

It's very unfortunate that Siri is in such a dismal, not to say crappy, state. I don't even think Apple needs to fix it themselves. I just want them to let me choose one of the models on the market and decide which aspects of my semantic index I'll allow each agent to see.

I also don't use the cellular function, because Google Fi, which I use as my main mobile carrier, still doesn't support the Apple Watch (deliberately, I suspect).

Here are my Apple Watch faces: the busy Infograph 1, Infograph 2, Utility with Camera, and Simple Bare minimalist.

Superintelligence in Predictions

Posted on 2025-04-22 by Dmitri Zdorov

super intelligence is taking over

There are two high-profile publications attempting to predict how artificial intelligence will develop in the near future:

AI-2027 and The Era of Experience Paper

Both are worth reading, or at least asking your favorite LLM to summarize and then explain the important points.

AI 2027 is a scenario written by several researchers about how AI development might unfold in the coming years, and how it all ends; they then asked a popular blogger to rewrite it in a semi-fictional storytelling format for easier consumption. To put it briefly, it all inevitably ends badly. Following abundant criticism, they split the ending into two versions: a gloomier one and one offering slightly unjustified hope. The 2027 in the title is just a label; the scenario actually maps out developments roughly until 2035.

Much of what they write may indeed unfold that way, but my opinion is that the world is more complex, and all these complexities and nuances matter, which is why things will ultimately turn out differently. Then again, we don't know, and I'm not attempting to predict while they did, so it's easier for me to criticize. Here's what I agree with: there are reasons for concern, but on the other hand, progress is irreversible, we won't stop it, and we shouldn't try to slow it down but rather prepare for positive scenarios and work on increasing their probability. And if superintelligence can destroy us all, well, it will, and we'll be powerless. Write diaries and blogs; they'll be used as source material for your virtual copies, your future descendants, so to speak. If we're lucky.

The Era of Experience is about something different. It says there are two very different approaches to AI development: RL (reinforcement learning) and LLM (large language models), and describes the difference. The authors describe a new era in artificial intelligence development, characterized by agents with superhuman abilities, learning primarily through their own experience rather than through analysis of human data.
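To make the contrast concrete, here's a toy sketch in Python. It's my own illustration, not anything from the paper, and the corpus, the reward, and all the names are made up: the LLM-style learner only absorbs statistics of what humans already wrote, while the RL-style learner acts, gets a reward from the environment, and updates from its own experience.

```python
import random

# --- LLM-style signal: imitate a fixed corpus of human-made text ---
corpus = ["the cat sat", "the dog ran", "the cat ran"]

def imitation_step(counts, example):
    """Accumulate word-following statistics from human text."""
    words = example.split()
    for a, b in zip(words, words[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1

# --- RL-style signal: act, observe reward, reinforce what worked ---
def rl_step(values, epsilon=0.1):
    """One bandit-style update driven by the agent's own trial."""
    actions = list(values)
    if random.random() < epsilon:
        action = random.choice(actions)                # explore
    else:
        action = max(actions, key=values.get)          # exploit
    reward = 1.0 if action == "good" else 0.0          # environment feedback
    values[action] += 0.1 * (reward - values[action])  # incremental update
    return action, reward

counts = {}
for example in corpus:
    imitation_step(counts, example)  # can only ever know what humans wrote

values = {"good": 0.0, "bad": 0.0}
for _ in range(200):
    rl_step(values)                  # knowledge comes from its own trials

print(counts)  # statistics of the human corpus
print(values)  # value estimates discovered through experience
```

The imitation learner is capped by its corpus; the bandit's estimates come entirely from interaction. That gap, vastly scaled up, is essentially what the authors build their "era of experience" on.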

I think they deliberately exaggerate the difference between these approaches, because both directions borrow tools and techniques from each other abundantly. There is certainly a difference, but one of the main insidious undertones is to show that their approach, RL, is better and more correct, and thereby to raise their own importance and secure more funding. It's understandable why. But it's still interesting, of course.

My general conclusion from all this is that predictors significantly simplify the specifically non-human complexity of how all this develops, and a simplified understanding at a qualitative level will yield different results in the future. That is, everything will be different. And not just "hard to say how," but specifically impossible because the difference will be so great that we currently lack the means in language, culture, and understanding of the world to comprehend it, let alone describe it.

Both publications are less about the technical development of AI and more about socio-philosophical anxieties. As always, to properly handle great complexities we must have order at home, and unfortunately there's little order now, and it's diminishing. You can look at this however you want – as a mental virus the planet uses to protect itself from parasites, or as human nature, or as anything else. But my own shtick about all this is that we need to try harder to understand the world more deeply and broadly, and this will help; then we just watch how history unfolds. My allegory is that the universe isn't expanding; it remains as it was, but the level of detail we perceive keeps increasing, which we experience as the expansion of space over time. And AI will still go through LSM (Large Scientific Models, a concept where models would be trained on scientific data rather than just text), where both RL and LLM are components.

Daily logos

I started writing a blog on this site in 1999. It was called Dimka Daily. These days many of my updates go to various social media platforms and to /blog here on this site, called simply Blog. I left Daily as an archive for posterity.