I disagree with your points regarding the likely 'non-problem' that will be (is) AI. Granted, AI is no new phenomenon; it has existed since Pong (what a game!) and arguably before in various guises.
I also accept that most people are scared of losing control to a machine, and that this in itself is not the largest concern and is, empirically speaking, a misplaced fear.
However, I think for the more discerning individuals out there, who truly understand the tech-human fusion, the biggest problem is losing control to the humans who control the AI and who will ultimately misuse it.
As power concentrates in the hands of the few (companies, individuals, dynasties etc.) we are increasingly at the mercy of those controlling the flow of information. History is being rewritten in front of our eyes. As each technological step forward occurs, its benefits will quickly fade as the technology is co-opted and subsumed into the belly of the larger corporate or big-government beast.
We are doomed not because of Terminator, but because of the power controlling a 'Terminator' (automated food dispensing, 'intelligent redistribution' of our funds etc.) and the inherent bias that will exist in the code.
Open Source AI carries far fewer risks of inherent misuse; however, I cannot see people putting up with the inconveniences that often come with OS. Most would, lazily but understandably, far rather take an integrated and convenient 'it just works' route than something that requires their time and effort to understand - an "Apple of AI" if you will. And that will be the significant issue with AI.
AI will be our downfall, if something else doesn't beat it there first, but it will as ever be human error and not AI that is at the core of our demise.
Good points. And if that's the case, then it's still not AI that's our downfall, it's us that's our downfall. But I'd also counter that we've never actually ended up creating our own downfall up to now, so to think it'll be different this time, even with highly advanced tech like AI, I don't see coming to fruition. There will absolutely be issues, problems, worries, as there are with things like social media now and how society interacts, but then a lot of good comes from these things too – there's always some kind of counterbalance. I just don't see that changing, even with something as powerful as AI. While we can see the worst in humanity, we can also see the best in it too.
Thanks Sam, I like your positivity but I cannot help but think it is misplaced. I do not see it as seeing the worst or the best in humanity, simply the reality - which will be a combination of the two. I can see how any fault of AI will inherently be our fault, thus a downfall of our own making - a re-watch of the excellent Westworld series reminded me of that recently. Regarding our own downfall - actually it has happened, many many times over (there is also the question as to whether we have done all of this before already, multiple times perhaps, but that is still rather sci-fi). I see AI as being a significant, but not the only, part in the downfall of civilisation - not necessarily the end of existence on the planet, though perhaps of humans. It will be more akin to us 'Roman-ing' ourselves and seeing everything we have created, both beautiful and ugly, fall to ruins. Many empires have come and gone; it could be that this time is indeed no different as the balance shifts between regions.
The Einstein quote about how WW4 will be fought (with sticks and stones) is how I inevitably see this current mix of technological advance, in the hands of those who cannot control themselves, and social unrest ending.