Discussion about this post

TC58

I disagree with your points regarding the likely 'non-problem' that AI will be (or already is). Granted, AI is no new phenomenon; it has existed since Pong (what a game!) and arguably before, in various guises.

I also accept that most people are scared of losing control to a machine, and that this in itself is not the largest concern and is, empirically speaking, a misplaced fear.

However, I think that for the more discerning individuals out there who truly understand the tech-human fusion, the biggest problem is losing control to the humans who control the AI and who will ultimately misuse it.

As power concentrates in the hands of the few (companies, individuals, dynasties, etc.), we are increasingly at the mercy of those controlling the flow of information. History is being rewritten in front of our eyes. As each technological step forward occurs, its benefits will quickly fade as the technology is co-opted and subsumed into the belly of the larger corporate or big-government beast.

We are doomed not because of Terminator, but because of the power controlling a 'Terminator' (automated food dispensing, 'intelligent redistribution' of our funds, etc.) and the inherent bias that will exist in the code.

Open source AI carries far fewer risks of inherent misuse; however, I cannot see people putting up with the inconveniences that often come with open source. Most would, lazily but understandably, far rather have an integrated and convenient 'it just works' route than something that requires their time and effort to understand - an "Apple of AI", if you will. And that will be the significant issue with AI.

AI will be our downfall, if something else doesn't beat it there first, but it will, as ever, be human error and not AI that is at the core of our demise.
