Similarities between communities involved with AGI and UAP
Role of Skeptics, Doomers, Optimists, and Money in AI and UFO circles.
There are certain parallels between the communities that engage with AGI and those involved in the UAP realm. These are, of course, broad generalizations that overlook many nuances, but it's still an interesting thought exercise to compare their differing perspectives.
Skeptics
In the AGI world, there are folks like Gary Marcus who stubbornly cling to their worldview despite repeated evidence to the contrary. He has been declaring that deep learning is hitting a wall since early 2022, only for ChatGPT to launch months later and GPT-4 within a year. In one tweet he mocks the idea that current systems could ever lead to AGI, while in another he quotes the godfather of deep learning worrying that we are far too close to it. He flip-flops between claiming that current AI systems are so flawed they can never reach AGI and signing petitions arguing we need to pause AI development because "contemporary AI systems are now becoming human-competitive at general tasks" and pose existential risk. At this point I'm not even sure what his argument is, other than opposing everything the tech industry does.
Meanwhile, in the UFO world there are skeptics like Mick West whose overreliance on priors blinds them to what's actually going on. His arguments against UFOs are less about finding the truth and more about debunking claims that don't fit his existing worldview. They start by assuming the claims are not true, then reach for off-the-shelf explanations while dismissing any data that doesn't fit as unreliable. The stance is to ignore everything until there is "hard data" to prove it. This approach works great when studying parts of nature that are easily measurable and relatively cooperative with our attempts to understand them. However, in an adversarial setting where information could be withheld or manipulated, whether by the military or by the UAP themselves, it has real limitations. We need actual critical thinking about the particular set of circumstances to understand the reality of the situation, not a pseudoscience where you give up and bury your head in the sand of your existing beliefs.
When trying to build something brand new at the frontier of human knowledge, like creating artificial intelligence, there are going to be a lot of wrong turns and unknowns. No one knows what the right path is: not Sam Altman of OpenAI, not Demis Hassabis of Google DeepMind, and certainly not Gary Marcus. Similarly, if you are studying something like UFOs, there is going to be a lot of noise and possibly deception. It is easy, fashionable even, to be a skeptic when 99.9% or more of the time you are right and the odds are stacked in your favor. But that doesn't mean you are right about this.
It is kind of funny that one of the chief complaints from AI skeptics is that today's models just extrapolate from the data they have seen and don't generalize to novel situations. Meanwhile, the human UFO skeptics are stuck unable to think beyond the priors of their own day-to-day experience, or to actually analyze situations that deviate too far from their beliefs.
We need people who look at the data critically and push back on those making wild claims or who are too eager to believe every story on the internet. But skeptics like Gary Marcus and Mick West offer less genuine skepticism than reflexive reactions from particular worldviews under challenge.
Doomers
In both the AGI and UAP worlds there are those whom I call "doomers". These are folks who are deeply pessimistic about human nature, the ability of society to adapt, and our prospects of co-existing with beings smarter than us. Mix that with a bit of a savior complex, and you have a group that thinks it knows better than everyone else how things will turn out, and believes it must protect the rest of us from some hypothetical future.
The AI doomers are fearful of what people will do with powerful technology and what the same technology might do to us. Folks like Eliezer Yudkowsky argue that building AGI is crossing a Rubicon beyond which we will all die, while those at the Future of Life Institute argue we should pause AI development until we somehow figure out how to "control" it, whatever that means. They come up with elaborate thought experiments that assume the worst case for everything. While I sympathize with these arguments, one can construct worst-case scenarios for every little thing in life. And yet, part of the absurdity of life is the routine neglect of the long tail.
In the UFO world, the doomers are the gatekeepers of the legacy UFO program, or folks like the Collins Elite. They generally have national security backgrounds, which colors their thinking, and are used to working in the shadows to do whatever they think is necessary to protect us. Their belief seems to be that the reality of UAP would be too much of an ontological shock and must be withheld from the public. They think the existence of aliens or NHI technology would somehow cause society to break down and stop functioning, and therefore they alone must carry the burden of this knowledge and use whatever means necessary to prevent its release.
The doomers are like the skeptics in that both cling to the fantasy that humans are somehow the apex intelligence in this realm. The difference is that doomers actually take action to protect the perceived status quo rather than just ignoring any clues they disagree with. However, their efforts are ultimately futile because they are going up against forces much more powerful than they are. The AI doomers are fighting the competitive nature of our society, which seeks to gain any advantage possible. They also assume superintelligence (intelligence greater than all humans combined) doesn't already exist, which is likely false. Meanwhile, the UAP doomers delude themselves into believing they can protect society from this knowledge. By all accounts, the UAP seem to be technologically more advanced and to have their own plans and goals. The idea that a few men can indefinitely hide the existence of UAP, whose technology "is far superior to anything that we had at the time, have today, or are looking to develop in the next 10+ years", seems rather silly.
To some degree, though, the doomers are probably right and should be taken seriously. But we shouldn't assume that they know it all and can be trusted with what's best for humanity. It is not for them alone to decide.
Optimists
This group is, as the name suggests, the opposite of the doomers. There’s still the same savior complex, but in this instance, they trust that society and humanity are capable of adapting. They also acknowledge the impermanence of our world and embrace the necessity of change.
The true believers in AGI, like Dario Amodei of Anthropic and Sam Altman of OpenAI, write of an "intelligence age" in which "machines of loving grace" transform the world for the better. While it's refreshing to have positive visions of how AI could benefit the world, how much of this is blind optimism as opposed to an actual roadmap we can enact? The assumption is that more intelligence is better, and that once we build AGI we (or rather, someone else) will figure out how to leverage it "for good".
But is intelligence the right trait to scale for humanity? Can more intelligence help us transcend our instinct for self-interest, or will it simply accelerate the existing race dynamics in a civilization predicated on competition? Will it somehow help us reach some game-theoretic state in which every actor acting hyper-rationally leads to more stability in the system? Intuitively, it feels like dramatically increasing intelligence alone, without a corresponding scale-up of compassion or cooperation, will end up being highly destabilizing.
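For a concrete illustration of why hyper-rational actors don't automatically produce stability, consider the classic one-shot prisoner's dilemma. The sketch below (plain Python with standard textbook payoffs, nothing specific to AI) shows that when each player optimizes purely for self-interest, the equilibrium they land on is worse for everyone than cooperation:

```python
# One-shot prisoner's dilemma.
# PAYOFFS[(my_move, their_move)] = (my_score, their_score), standard textbook values.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move: str) -> str:
    """A perfectly 'rational' player's move against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection dominates no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...so two hyper-rational players land on (1, 1), even though
# mutual cooperation (3, 3) would make both strictly better off.
print("Rational equilibrium:", PAYOFFS[("defect", "defect")])
print("Cooperative outcome:", PAYOFFS[("cooperate", "cooperate")])
```

Smarter players don't escape this trap by reasoning harder; the incentive structure itself has to change, which is exactly the "compassion or cooperation" part of the equation.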
Meanwhile, the disclosure folks (e.g., Garry Nolan, Karl Nell, and Danny Sheehan) believe the government should stop hiding information about UAP. In principle, this obviously makes sense. We are probably much better off with more transparency from our government, and ultimately it should be working for us, not the other way around. But what do the people pushing for disclosure think the government knows, and what does the government actually know? Do they have firsthand knowledge of the information they want disclosed, and how complete is that information? There's always the possibility that they were fed deliberate misinformation, or that what they know is only partial and there's some truth to the beliefs the doomers hold.
Both AI and UAP disclosure optimists also believe that there are powerful technologies we can build or reverse engineer to distribute to all of humanity. For AGI folks there’s the belief that intelligence is supreme and once we reverse engineer intelligence itself we can apply it to solve every other problem. For the UAP crowd, it’s more about reverse engineering specific technologies retrieved from these non-human intelligences for use in energy, medicine, propulsion and more. They both recognize the potential dangers that might arise if these technologies are not developed appropriately, but are ultimately optimistic about humanity's ability to adapt and believe in the arc of the moral universe.
Unfortunately, reality is messy and things almost never work out as we imagine. Even with well-planned, controlled disclosure, it's a tightrope walk to reveal only the "right" information without other aspects of the phenomenon and its secrets becoming public. Scalable intelligence is a necessary, but not sufficient, tool in humanity's toolbox for surviving in an environment that is constantly in flux. It's essential to carefully examine our fantasies, separating what's merely delusional from what's practical optimism, so we don't lose sight of the effort required to make our visions a reality.
Investors/Corporations
You need money to live in this society of ours. Presumably, reverse engineering any UAP technology and understanding the phenomenon would require a large sum of capital. Building AGI certainly seems to require A LOT of money too. Dealing with the influence of money is therefore inescapable.
Based on current technology and trends, it is unlikely that some lone actor in their basement can just code up an AGI. (Although if anyone can, it's probably John Carmack and Rich Sutton at Keen.) It takes massive amounts of compute and data to train frontier models, which means the idealistic spirit of science and research must inevitably meet the reality of capitalism. Microsoft didn't invest $14 billion into OpenAI because of its mission; it did it to make Google dance. In fact, under the current terms, Microsoft technically loses its investment if OpenAI actually succeeds in its goal of building AGI. The incentives are not perfectly aligned here, although it is difficult to see what the alternatives are. Relying on another tech giant, Meta, to keep its word on "open sourcing" its model weights doesn't sound like a long-term strategy, and running a think tank without a frontier model or any leverage in the ecosystem doesn't seem practical.
At this year’s Sol Conference, there was a whole section on investments into UAP. Rizwan Virk had a nice talk about the path towards building up a startup and investment ecosystem around UAP. This actually makes a lot of sense in two ways. First, there’s no better way to convince the government to take certain action than to get big money and the donor class behind it. If those with power and money see investment opportunities in disclosure, they will lobby and leverage their influence to get these technologies out, for better or worse. Second, in theory, the most effective way to reverse-engineer UAP technology and ensure it benefits the world is to foster a robust ecosystem of companies and markets. Such an environment can attract top-tier talent and resources from across the world, ultimately creating the high-quality products and innovations we need to solve our most pressing challenges.
However, if the US government eventually begins UAP disclosure and grants the private sector access to these technologies, how will it unfold? Will it be done in a way to foster a broad, thriving ecosystem of innovators, or will the spoils accrue to those with close ties to whichever administration is in power at the time? And for those founders and investors who get lucky, what incentives will they have to refrain from abusing this privilege to create the next generation of monopolies?
AGI and UAP both involve technologies that are extremely powerful, with implications for almost every aspect of civilization. But investors care about only one thing: making a return on their money. They have an uncanny ability to ignore everything else and focus on increasing an arbitrary number, no matter how ridiculous the circumstances; I once saw a report recommending investment strategies in the event of a nuclear war. Many of the challenges facing our government today are rooted in the influence of money on politics. Over time, financial interests have consistently subjugated existing institutions, bending them to serve their own ends. The question now is whether we can harness the benefits of markets and money without ultimately being consumed by their incentives.
Final Remarks
Again, this is meant as a fun comparison of some similarities I've noticed. It is far from comprehensive, but I am curious what everyone else thinks. What did I miss, and which areas could be expanded upon?