BLOG

Artificial Intelligence Has Arrived. Does This Mean Humans Are Done?
Artificial Intelligence (AI) is all the rage. You can’t read a headline or turn on the news without hearing about how machines with vast computing power, access to billions of books and documents, the ability to teach themselves, and blinding processing speeds are now poised to take over the world.
This article by prominent historian Niall Ferguson offers a glimpse of the dark side of AI. Experimenters now envision machines taking on a life of their own and attacking humans and civilization.
It’s important to remember that if the machine goes berserk, you can just pull the plug. Apologists for AI claim that pulling the plug won’t work because the machine will anticipate that strategy and “export” itself to another machine in a catch-me-if-you-can scenario, where disabling one location won’t stop the code and algorithms from popping up elsewhere and continuing the attack.
Maybe. But there are all kinds of logistical problems with this, including the availability of enough machines with the processing power needed, the fact that alternate machines are likely to be surrounded by firewalls and digital moats, and a host of configuration and interoperability issues. We need to understand these constraints, but for now, just pull the plug.
In fact, there are a number of safeguards being proposed to limit the potential damage of AI while still gaining enormous benefits. These include transparency (so that third parties can identify flaws), oversight, a weakened form of adversarial training (so the machine can solve problems without plotting against us in its spare time), approval-based modification (the machine has to “ask permission” before activating autonomous machine learning), recursive reward modeling (the machine only moves in certain directions where it gets a “pat on the head” from humans), and other similar tools.
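To make one of these safeguards concrete, here is a minimal sketch in Python of how an approval-based modification gate might work: the system queues any self-generated change and a human reviewer must sign off before it takes effect. All the names here (ApprovalGate, propose, review) are hypothetical illustrations, not part of any real AI framework.

```python
# Minimal sketch of an "approval-based modification" safeguard.
# All class and method names are hypothetical illustrations.

class ApprovalGate:
    """Holds proposed self-modifications until a human reviews them."""

    def __init__(self):
        self.pending = []   # proposals awaiting human review
        self.applied = []   # proposals a human approved
        self.rejected = []  # proposals a human rejected

    def propose(self, description):
        """The system submits a change; nothing happens until approval."""
        self.pending.append(description)

    def review(self, approve_fn):
        """A human decision function accepts or rejects each proposal."""
        for proposal in self.pending:
            if approve_fn(proposal):
                self.applied.append(proposal)
            else:
                self.rejected.append(proposal)
        self.pending = []

gate = ApprovalGate()
gate.propose("retrain reward model on new data")
gate.propose("disable logging")  # a reviewer should reject this one
gate.review(lambda p: "disable" not in p)
print(gate.applied)   # ['retrain reward model on new data']
print(gate.rejected)  # ['disable logging']
```

The point of the pattern is that the “ask permission” step sits outside the system being modified, so the human veto cannot itself be optimized away.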
Of course, none of these safeguards work if the power behind AI is malignant and actually wants to destroy mankind. This would be like putting atomic weapons in the hands of a desperate Adolf Hitler. We know what would have happened next.
The solution in that case would be more political, forensic, and defense-oriented. Intelligence gathering would play a huge role. Of course, that evolves quickly into a machine-versus-machine intelligence war of collection and deception. Imagine James Bond with a hyper-computer instead of a Walther PPK.
The latest developments in AI and GPT (generative pre-trained transformers) put us squarely in a brave new world. Investors need to be careful about relying on GPT systems for financial advice, despite their enormous processing power.
The output is never better than the inputs, and the market inputs are littered with bad models, false assumptions, poor forecasting records, and biases. Once bad actors start populating the literature with misleading information and developer biases infiltrate the code, it’s not clear that AI will be much better than your gut instincts.
Corporate leaders and institutional fiduciaries looking to incorporate state-of-the-art predictive analytics into their risk mitigation and strategic analysis should click the link to learn more about Raven Predictive Analytics®.
OUR MISSION
Raven Predictive Analytics®, a patent-pending enterprise software as a service (SaaS), disrupts existing predictive analytics by more accurately modeling capital markets using complex systems, augmented intelligence, and team science.
Presented in a streamlined and personalized data center, Raven Predictive Analytics® will revolutionize the way corporate risk managers and institutional investors read the market.