Hi Sam, I’m well into my 81st year and have spent all of my working life involved in cutting-edge computer and communication technology. Thermionic valves and other discrete components were all the rage when I started - I can, for instance, explain the operation of a variable-mu pentode or recite the non-woke rhyme used to remember the colour coding for resistors (BBROYGBVGW) - so my journey over the past 60-plus years has been nothing less than amazing. The majority of the advances in technology have been incredible, even for those of us closely involved in them, but the advances in AI, and the ever-increasing speed of those advances, are simply breathtaking and almost unbelievable. If what I read is correct, we can expect AI to move to AGI (when it will be capable of operating at human-expert level across multiple domains) and on to superintelligence (when it will far exceed human capabilities both physically and mentally) within the next decade - or quite possibly sooner.
The difference for me between the “old technology” I have lived through and AI is this: I always felt we were in total control of our destiny, and I was excited and challenged by the potential benefits of the former, whereas I’m excited but fearful - almost frightened - of the potential of superintelligent AI (ASI) to disrupt the human condition in a bad way, quite possibly in ways we have yet to understand or even to consider possible.
Am I being overly pessimistic? Perhaps I am, but it’s the SPEED of progress that’s really scary. We humans are a clever bunch, and my hope is that we can stay ahead of the technology and find ways to keep ASI always subservient to human control.
However, even if “the good guys” are nimble and smart enough to retain control, what about the many bad actors out there who could possibly obtain the capability to release the awesome power of ASI?
The questions I ask myself are:
1. How can we make sure this can never happen, by accident or by design? Are we clever enough to stay ahead of the game, especially as the speed of the game increases?
2. Can we trust Big Business or Governments to sort this out? I fear the first will be constrained by commercial interests and the second by incompetence.
3. Am I overstating the potential for harm to the human race? I don’t believe I am, and I’m not the only one: as far back as 2014, Professor Stephen Hawking could see the potential for ASI to erode and eventually replace humans as the dominant species on Earth. Please Google (or should that be ChatGPT?) to read what he said.
I am of course beginning to understand the huge potential ASI has to enhance and improve the human condition across most (all?) of our activities, but I can’t help feeling this potential for good is overshadowed by the other side of the coin. BTW: although not the primary concern raised in this post, what’s good for one person might not be good for another. I’m thinking of ASI in a conflict situation here.
Please let me know if you think I am overreacting. To convince me you will need to answer the three questions posed in my text!
Regards, Martyn
Great comment and thoughts! You know what, I'm going to take this into our next edition of AI Collision, look at your three questions, and see what answers come out of my grey matter.
About Biden - I read online that wealthy Democratic donors are thinking of withholding funding from his campaign, and that Obama is working behind the scenes to replace Biden in the race. We'll see what happens at the convention.