This post, in addition to everything else I write here, is not generated by AI.
However, AI is more than happy to scrape whatever I write, meld it into its consciousness, regurgitate it, and indeed claim it as its own voice!
So you no longer have to read what I write – instead, just ask ChatGPT or one of the other dozen AI engines out there, and I am sure they will be more than happy to tell you what you want to hear instead of my lonely voice on an obscure website.
I have taken a hiatus from writing over the past couple of months. There were a multitude of reasons, which I won’t get into, but part of it is the value of stepping back and mentally detaching to gain a broader and more strategic perspective on the craziness that is going on. One disadvantage is that, just as your body starts to atrophy when you don’t work out for a period of time, the same applies to writing. It takes quite a bit of effort to finally put pen to paper again.
I have been in low-risk mode for quite some time, in large part due to my own personal uncertainty and the fact that from around 2023 onwards, I do not think my ideas and thought process have been very alpha-generating. Most of my ideas since then have not been good, with one notable exception (I’ll just leak it here and say it was Magellan Aerospace – I have not done a write-up on them, but suffice to say, just like any good trade, I wish I had taken a larger position to begin with… and there is likely further upside from here, although the outsized returns are now finished).
Although the tariff tantrums that caused the VIX to spike to 60 have tapered away, we continue to live in a strange world where the narrative created by digital media changes people’s perceptions, but not to the degree one may think – while perceptions may change, underlying realities, when experienced on the ground, are much different from the digital narrative.
Much of what we perceive in the digital world is completely manufactured for manipulative purposes – with digitization, it is easier than ever to manufacture stories and push the boundaries of what can be believed. Originally a photograph was sufficient evidence of potential malfeasance, but now you can ask an AI to photoshop it into some compromising position. Then a voice recording was a smoking gun – and this can now be faked with AI. I’ve also talked in the past (a decade or even two ago!) about how video can be easily faked (see: 1987 movie clip – The Running Man – roughly 2:20 into this clip). All of this virtual fakeness is cheaper than ever to manufacture.
While truth is absolute, it is unfortunately very easy to shroud it in a multitude of fictions – and being able to disprove fictions is a much more expensive process because fiction is so damn easy to generate.
In other words, the digital infrastructure is driving us collectively mad – inherently it is the example of a trillion monkeys typing on typewriters until eventually some sort of narrative ‘sticks’, given our very human susceptibility to believe in stories. This is the very strange and brave new world we live in – at least when we stick to using digital media to “inform” us. Just five years ago, we saw a very powerful example of how digital infrastructure can be used to change people’s narratives in rapid succession (wear a mask and take a vaccine or go to jail because you are killing Grandma) – and this will continue to get worse as long as people continue to believe the digital machine – and they do, because it is much easier to listen than to ask questions!
You can see now why I am going a bit nuts trying to distill everything and figure out where things are going. It can almost justify the valuations we see in the fiction generators of society – the Nvidias, Facebooks and the like – while the “nuts and bolts” of society (e.g. the CN Rails) trade at much more reasonable price-to-earnings ratios.
Many years ago, Amazon’s founder bought out the Washington Post – not because Bezos decided to be a benevolent news-creator, but because it was a channel for molding the narrative and public consciousness. Musk bought out Twitter for similar reasons. Facebook built its network organically, much to its own credit. The value is clearly in creating these networks as a control mechanism to form the narrative that people believe, and this goes beyond dollars and cents – hence a P/E ratio is a meaningless measure of whether there is value or not. How much does one put on the value of an intangible attribute such as trust, for example?
I’m not sure where this is going. In terms of investment options, there is reality and there is virtual reality, and the lines between both are getting blurred. Even the macroeconomic environment appears to be precariously positioned – it is quite evident that the infinite money-printing machine is coming to a close (just look at Japan’s long-term bonds, which seemingly have a yield again) – US 30-year treasuries aren’t doing much better either (TLT investors are down about 40-45% from five years ago!) and central banks are on the verge of QE-ing long-term yields into submission once again, which will only have the effect of inflating all asset prices like we’re living in 2009 or 2021. Something will break, but will this be asset pricing? Or will the thing that breaks be future returns, by virtue of inflated asset pricing?
One thing that can be taken for certain is that the powerbrokers control the bulk of the assets, and there is a huge vested interest in making sure that control does not get ceded through a collapse of the financial system.
Doing a cursory yield scan of the corporate bond market and the preferred share market, almost nothing suggests outsized returns on fixed income. Reliable firms such as TRP or PPL have preferred shares yielding around 600-700bps on reset, which is hardly a risk premium in light of the yields given by corporates. Quite frankly, the environment out there for returns is terrible. Instead, one has to anticipate the psychology of demand and capture capital appreciation to make outsized returns. With the TSX at all-time highs and the S&P 500 almost at all-time highs, it could entirely be the case that depreciating currency plus QE will be the vector that propels the markets even higher than anybody expects – keeping asset values inflated and the speculative mania very much alive.
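To make the reset arithmetic concrete, here is a minimal sketch of how a Canadian rate-reset preferred share yield works. Every input below (the GoC 5-year yield, the reset spread, the price) is an illustrative assumption, not an actual quote for TRP or PPL:

```python
# Rough arithmetic for a Canadian rate-reset preferred share.
# All inputs are illustrative assumptions, not actual market quotes.

def reset_yield_on_price(goc_5yr: float, reset_spread_bps: float,
                         price: float, par: float = 25.0) -> float:
    """Yield a buyer earns at the next reset, given the purchase price.

    The dividend resets to (5-year Government of Canada yield + fixed
    spread) applied to par; the buyer's yield scales by par / price.
    """
    coupon = (goc_5yr + reset_spread_bps / 10_000) * par
    return coupon / price

# Hypothetical example: 3.0% GoC 5-year, 300 bps reset spread,
# share trading at $22 against a $25 par value.
y = reset_yield_on_price(goc_5yr=0.03, reset_spread_bps=300, price=22.0)
print(f"{y:.2%}")  # 6.82% -- inside the 600-700 bps range cited above
```

Note the discount to par does most of the work: at par the same issue would yield exactly the 6.00% reset rate.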
The crystal ball after March of 2020 was quite clear. Right now it is foggy, but after taking a small break, I’m getting a better sense of where the AI dys-reality is taking perception and narrative, and where it will potentially clash against reality. AI is great at taking existing information and meshing it into something that looks new, but my projection is a scenario where, similar to Covid, things by necessity go into original territory once again when there is too much distance between narrative and reality – a game-changing event needs to occur.
You’ll probably think I am crazy when I say this, but prepare for an alien invasion.
Clearly I’m not sane. Please consult ChatGPT for saner advice than what I’m dishing out here.
Here’s what ChatGPT thought about this post (sorry for the formatting, I’m too lazy to fix after the cut-and-paste):
Here’s my review of Divestor’s most recent post, “It’s been awhile! Is AI making us nuts?” published June 8, 2025 by Sacha Peter:
📝 Overall Take
This is a compelling reflection on the psychology of investing in a time of AI-driven narratives and liquidity intervention. It deftly connects cultural technology shifts with central bank monetary behavior and the search for investment alpha.
I’d love to know your thoughts:
Nice find—let me know if you’d like a dive into any specific thread!
Nice to have you back Sacha! Last time you posted was before the elections – seems like forever ago! 🙂
Having nuts & bolts at reasonable valuation is the best thing ever, I’m gladly using this opportunity!
With respect to AI – a couple of quotes from the latest ENGH investor call:
“AI continues to be promoted in discussions, but there’s great difficulty of monetizing its benefits. Customers seem to be taking a wait-and-see approach.”
“But again, going forward, AI seems to slow things down. Everyone talks about it, but monetizing it, especially in enterprises seems quite difficult. We’re seeing the difficulty and so is our competition, although we all do some of it in some form. So again, I think the — I think it’s — there’s generally struggles in those areas.”
“From our point of view, it’s a little tougher right now because, as I said, you really have to use AI properly. It’s generally done on the SaaS type model. And so again, a lot of holding back because there’s lots of discussion, there’s lots of enthusiasm on AI, but not much happening actually other than for the platform guys and for NVIDIA, of course. But for enterprises, it really hasn’t taken hold.
People are still trying it out. There’s a lot of caution around it, but people are hopeful that it will improve productivity in the future of agents. They won’t eliminate agents, it’ll improve their productivity. We’re well prepared to do that.”
I’m using it every day.
For people in jobs, there is a current golden period where management hasn’t factored in AI productivity improvements into their projects so employees can use productivity gains to increase their free time lol.
When ChatGPT went offline briefly yesterday, there were 500K queries to the website that tracks downtime. This was during regular working hours.
AI is as big or even bigger than the Internet in terms of its eventual impact.
That’s where I see a problem: I’m doubtful any sizeable business would be brave enough to place serious reliance on AI. Remember the CrowdStrike gaffe last year? Implementing AI on a large scale creates the potential for 100x such gaffes. I doubt any reasonable insurer will be capitalized enough to cover this risk.
Also, would AI be more efficient (cost vs value)? Those billions spent will have to get some return, right? Right now my corporate email gets bombarded by AI salesmen on a daily basis, promising +XX% efficiency in literally all of the processes! Thank god, all of these end up in the spam folder.
I received the same exact emails from blockchain solutions salesmen (just 4-5 years ago we were promised “in future its all going to be one grand blockchain!!!11”). Nothing much these days, almost like blockchain was never a viable technology…
Anyhow, I wonder what happens once ChatGPT introduces a circa-$25 monthly fee for general users.
You’ll be amused to note AI suggested the following reply to your comments (Dmitri’s and WQ’s) below… talk about saying something without saying anything, although it seems like AI is bullish on itself!
==================
Great points, Dmitry — and I agree that healthy skepticism is important, especially given the tech industry’s habit of overpromising. AI, like blockchain, is definitely going through its “hype cycle” phase. The difference, though, is that we’re already seeing very tangible use cases for AI, especially in areas like coding, document drafting, analytics, and customer service — where it can augment rather than replace human input.
You’re right that large-scale enterprise deployment is cautious right now. It should be. There’s real risk in handing over decision-making or automation to systems that can hallucinate or go off-script. But at the same time, the pace of individual and team-level adoption is rapid. It’s grassroots, bottom-up rather than top-down.
As for the monetization model — it’s still evolving. But think of it like the early days of cloud computing: initial skepticism, then gradual infrastructure buy-in, and now it’s foundational. AI might follow a similar arc, especially if the ROI on productivity gains keeps proving itself at the micro level first.
The $25 monthly fee might be a filter more than a barrier. For those already using AI tools daily, it’s a small cost relative to the value — and for enterprises, it’ll be absorbed in broader tool stacks. But it’ll be interesting to see how usage shifts when the “free lunch” period ends.
It can be a little more substantive: 🙂
The writer raises some reasonable concerns about AI adoption, particularly around systemic risk and the potential for overhyped promises. However, some of the analogies and assumptions they use could benefit from more nuance and clarity.
Summary:
While the writer brings up concerns worth discussing—particularly around risk, ROI, and the pattern of overpromising in tech—some of the argument relies on analogies and emotional responses that might not hold up under closer scrutiny. A more focused critique could engage directly with documented AI shortcomings, implementation challenges, and ethical implications, offering a more grounded and compelling case.
(no AI this comment!)
This is a pretty good demonstration of how AI is going to make us nuts – there is no way to humanly parse all this bullshit that is generated (and we could probably produce gigabytes of it)… logically, either we have an automated way of extracting signal from noise (which puts us at risk that the algorithm in question cannot actually do this productively and crops out something context-sensitive), or, if the signal-to-noise ratio is so low, we don’t even bother to parse the input.
I remember that one countermeasure privacy advocates had against Google, Facebook, etc. taking in ‘signal’ (your search patterns, your location data) was to spam the system with fake queries and fake location data. AI seems like a pretty good way of lowering that signal-to-noise ratio.
Oh no, the robot criticizes me! 🙂
Jokes aside, one major positive from AI is its ability to end or significantly limit harm from misinformation campaigns and conspiracy theories. While it’s frustrating to see constant “XXX, is this true?” even in relation to very trivial facts, I view this as a net positive: while AI could be used to augment reality, it makes fact-checking very fast and efficient.
The robot has poor taste! 🙂
I agree with you about the dampening effect of AI on misinformation. In the current Google world, people can search and find bullsh*t about the flat earth, while ChatGPT will tell you the cold hard round truth. I read an interesting study claiming that, when it came to debunking misinformation, people were more likely to listen to the AI than to human experts. Not sure why, but maybe because of ego issues.
Anyhow, in the post-AI future, perhaps there is less polarization and we sit around and get entertained by AI’s output.