OT: AI, please walk me back from the edge

Submitted by Bodybag on May 2nd, 2023 at 8:31 PM

Going to take a stab given there are probably a good many here who have some background in this field.

With the recent developments in AI over the last 2-3 months (ChatGPT, Bard, etc.), I'm getting more and more concerned about our future as a society and the profound changes afoot. I'm not talking about an imminent Skynet scenario with nukes flying all over, but a more subtle and sinister change in society, with mass unemployment and social unrest. I see this happening within the next decade if not sooner. And I can easily imagine things only getting worse beyond that as generative AI takes shape. If you're not aware, look up the singularity for further clarification on this.

I genuinely fear for my family's future, and I find this is keeping me up at night. We as a society seem to be walking blindly into this unknown of god-like power, and I can't see it as anything other than a catastrophic end for everyone save the ultra wealthy within the next two decades. UBI seems to be one answer, but that would be a paradigm shift in the capitalistic mentality we've had for centuries. Am I being too paranoid? Can someone with better knowledge of this walk me back from the cliff of despair? This seems to be an inevitability at this point, and I don't understand how everyone isn't utterly terrified.

yossarians tree

May 3rd, 2023 at 1:38 PM ^

Yeah, the old "people said the same thing about X" argument does not apply here, I'm afraid. This is technology that is potentially smarter than us and that can then create further technologies we won't even understand.

I'm on the side of terrified, and even the people who think we can "align" AI to work only with/for us are cynical about safety, because the arms race among the companies building this is hell-bent on profit. They don't give a damn about safety, or if they do, it's secondary to winning the race to market.

Tristan Harris, the guy who gave us "The Social Dilemma" a few years ago, has a new TED-talk-style lecture on YouTube right now. It's called "The AI Dilemma," and while some tech watchers think he's an alarmist, his presentation is freaking terrifying. New ramifications are emerging seemingly daily now.

BoFan

May 3rd, 2023 at 1:28 AM ^

You shouldn't worry about AI (you should never worry about anything), but you should plan ahead for its impact and vote for policy that deals with some of the likely consequences.

I took my first AI course over 35 years ago at Michigan. 

As background, in the '50s people were worried about mainframes replacing tens of thousands of jobs where there used to be rooms of people using calculators to add up numbers. Mainframes did replace those mundane jobs, but they created many more higher-value jobs.

In the 1980s people were worried about robotics replacing lots of mundane, repetitive factory jobs. They did, and 200 workers could produce the same output as thousands, but many more higher-value jobs were created.

Unfortunately, during the '80s and later, taxes on incomes over $1M were reduced from a max rate of 80% down to 38%. Also, investments in education in the US did not create the trained workforce that could fill all those higher-value jobs. As a result, the wealth transfer from job replacement went to the owners of the companies that provided the automation and to immigrants from countries that did invest in training workers with the right skills. Had taxes on million-dollar incomes stayed high, and had that money been invested in training, things could be different.

Now with AI today: so far AI hasn't replaced jobs. It's improved productivity for many. Also, the hype about conversational AI is overdone. ChatGPT is more BS than value. Those algorithms aren't going to replace jobs.

BUT, in the near future this type of AI, as it improves, will replace jobs at an unprecedented scale. And it's not the same as the '50s and '80s. AI can replace a lot of coding jobs, and it can replace the vast majority of graphic design jobs, while creating very few new jobs.

Rather than "not worry," there are at least two things to consider. First, make sure your kids follow a career path that is unlikely to be replaced by AI.

Second, ask yourself: is it OK for a super smart tech entrepreneur to invent an algorithm that can replace millions of jobs, leaving all those folks out of work and unable to care for their families, while that entrepreneur, without a wealth transfer tax, makes billions in profit from an algorithm that automates the work of millions for far less? Of course not. The only way to address this will be a wealth transfer tax that funds a basic living wage.

Buy Bushwood

May 3rd, 2023 at 9:15 AM ^

Service industry, bro. America became a "service industry" economy, remember? And the jobs didn't really go to machines, they went to China. This previous reply grossly oversimplifies what's coming, which is an integrated intelligence paradigm that will be able to quickly do almost everything that humans can do, and do it much better. So it's a question that no previous industrial revolution has come close to preparing us for. To couch it in terms of the '50s and '80s is to fundamentally misunderstand the problem.

The other part the reply misses is that the periods of industrial change he describes from previous generations are fundamentally different from now. In those other periods of change, humans remained the fundamental intellectual engine in charge of the changes. That arrangement is about to become history. At best, our relationship with emergent AI will be one in which we tell it our desires and it does the conceptual and creative work in the background. But we're about to lose our creative agency, which, unfortunately, is one of the defining features of the human condition.

WestQuad

May 3rd, 2023 at 10:33 AM ^

I'm already replacing people with AI and scripts written by AI. I like my people, but there is no reason for them to do something AI, or a script written by AI, can do 100 times faster. The average employee is going to be 5x more productive in the near term and 100x more productive soon after. I'm leaning into AI, as I'd like to not be one of the destitute masses clamoring for bread.

Things will adjust and find their own level, but we'll either have some sort of utopia with UBI, or there will be a few trillionaires and the rest of us will be living in some Mad Max dystopia. Capitalism works well when workers have some bargaining power. It won't work at all if the workers aren't needed.

Don

May 3rd, 2023 at 12:08 PM ^

"I'm already replacing people with AI  and scripts written by AI."

What business are you in? Are you the owner? How many people are you replacing?

"we'll either have some sort of utopia with UBI, or there will be a few trillionaires and the rest of us will be living in some Mad Max dystopia."

Welcome to your new neighbor.

The Homie J

May 3rd, 2023 at 10:58 AM ^

In an ideal world, AI is used to automate the crap jobs, letting people work less and choose the jobs they prefer rather than working wherever they must to earn decent pay. In an ideal world, we implement UBI to start building toward a society where all hard work is automated and people get to spend their UBI and time on things they enjoy, like arts, sports, leisure, travel, etc. In an ideal world, you work maybe 10 hours a week while taxation on automation lets you earn the same salary you did working 40.

Sadly, we live in a greedy capitalist world which will instead use AI to cut costs on labor and pass the savings up the chain while the working/middle classes are forced to learn new trades while competing against larger and larger numbers of similarly displaced citizens.

Technology won't ruin us, greed will.

yossarians tree

May 3rd, 2023 at 1:50 PM ^

The trouble with imagining a UBI safety net is this: why would the elites who control everything and pass out the checks even want those people around? All they do is consume resources and pollute the environment. Hell, you could argue that an elite class with nothing but contempt for the lower classes has already emerged, but at least they still need people around to grow their food, dig up their energy, and drive the delivery trucks to their mansions.

What happens to the proles when the machines do everything? They'll keep a few around as concubines, athletes, and entertainers. It's very easy to imagine a world of a few hundred million people living in a clean, garden paradise that really might resemble a utopia--but there ain't any room for most of us in it.

matty blue

May 3rd, 2023 at 3:28 PM ^

"The trouble with imagining a UBI safety net is this: why would the elites who control everything and pass out the checks even want those people around? All they do is consume resources and pollute the environment."

this is exactly my contention.  there's clearly an incentive for the people that would benefit from these systems to put as few constraints as possible on their use. 

the next corporate entity that absolutely, and without hesitation, puts societal health ahead of pure profit margin will be the first.

bluebyyou

May 3rd, 2023 at 4:35 PM ^

What are you taxing? How many lines of code they write in a year? The number of processors they own? Perhaps you will go after the end users who enhance their bottom line by using AI. A few companies are going to become insanely wealthy and might be powerful enough to replace or displace government. A new economic system would have to emerge, or perhaps artificial superintelligence would supply us with everything we need.

I wonder what I would do with myself if I didn't have a job that provided me with an intellectual challenge. Eternity spent on the holodeck with my 19-year-old college girlfriend sounds good for a little while, welllll, maybe more than a little while, but I'm not so sure how well I would do with nothing to do.

I've been filing patents on this stuff for a couple of decades and if you think what the MSM reports is the latest and greatest, I can assure you that it is not.  What has been conceived is way ahead of what is practiced.

FB Dive

May 2nd, 2023 at 8:39 PM ^

There's no point in losing sleep over it. I'd wager AI just makes workers more productive rather than causing mass unemployment, just like almost every other major technological improvement over the course of human history. And even if it replaces some types of jobs, it will create new types of jobs that don't currently exist.

By the way, I doubt the AI era is as close as you seem to think it is. I was very impressed by ChatGPT until I started asking it questions about my field of expertise. Then I realized it's more of a bullshit generator than a world-changing AI. At least at this point.

Blinkin

May 2nd, 2023 at 8:50 PM ^

I think this is about what will happen. So far the hype has outpaced the results. And other technological changes have resulted in social, well, changes, but not collapse or calamity. Things will be different in some ways, but the same in many others. 

Cmknepfl

May 3rd, 2023 at 12:25 AM ^

Continuing to add more data is what has gotten ChatGPT to the point where it can write papers, but it's not just about data.

What's striking to me is that despite having so much data, it's still not aware. That's the thing that would be truly scary. It's not conscious.

I've always been a determinist and thought consciousness was computational. I am starting to think that's not the case. To me this adds to the mystery of consciousness and supports the hypothesis that AI as currently constructed will be a tool we wield, like the internet of the '90s and the robots and self-driving cars of more recent times.

Yes, there will be disruptions, but in general it makes society more productive, and a rising tide lifts all boats.

Buy Bushwood

May 3rd, 2023 at 9:20 AM ^

I don't think you understand the progress that's been made in the last five years. There was a major, unexpected breakthrough at Google in ~2017 that led to all this. It advanced research a generation or more, essentially overnight. While generalized AI was thought to be 30-70 years away, if ever, it's now considered more like five years away. That it can't answer technical questions yet isn't what you should focus on. That you're having a conversation with it at all, where that's even a consideration, should astound you compared to five years ago, when chatbots had a library of 100 preset replies. Now, the blink of an eye later, you're arguing that just because it seems human doesn't make it human.
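The breakthrough isn't named above, but it's presumably the 2017 "Attention Is All You Need" Transformer paper out of Google (an assumption; the post doesn't say). For the curious, here is a rough NumPy sketch of its core operation, scaled dot-product attention; real models stack many such layers with learned projections, so treat this as an illustration of the idea, not an implementation.

```python
# Scaled dot-product attention in plain NumPy: each position builds its output
# as a weighted mix of every position's values, with weights coming from
# query-key similarity. This is the core operation of the Transformer (2017);
# the shapes and random inputs here are purely illustrative.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over keys
    return weights @ V                                  # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional queries
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)   # (4, 8): one mixed vector per token
```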

Rabbit21

May 3rd, 2023 at 9:43 AM ^

Right now, from what I am seeing, it's used to enhance and provide details for the stuff people say they do but don't really. Right now it's being used as a data processor that helps make more informed decisions, but you still have to teach it what to look at and what variables to put into place. It's a productivity enhancer, and yes, it'll have some impact, but right now we're in the middle of a labor shortage that I don't think is going to go away, so why not drive toward something that will make the people who are working better at their jobs? If you're an accountant and 30% of your job is fixing coding mistakes, isn't it helpful to have something that will catch them at the start?

WestQuad

May 3rd, 2023 at 10:38 AM ^

ChatGPT isn't sentient. It is an auto-complete generator. It can do lots and lots of different tasks very well if you prompt it correctly and/or fine-tune it for a task. My current company (EdTech) is so focused on the problems in front of it that we will be out of business in 2-3 years when the next generation of products like Khanmigo takes over.
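To make "auto-complete generator" concrete, here is a minimal toy sketch of the generation loop: predict a likely next token from what came before, append it, repeat. It uses a hand-built bigram table instead of a neural network, so it's only an illustration of the loop, not how ChatGPT actually works under the hood.

```python
# Toy "auto-complete" generation: build a bigram table from a tiny corpus,
# then repeatedly pick a likely next token given the last one generated.
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token follows the model".split()

# Count which token follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, max_tokens: int = 8) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break  # no continuation ever seen for this token
        tokens.append(candidates.most_common(1)[0][0])  # greedy pick
    return " ".join(tokens)

print(generate("the"))
```

On a corpus this small the output just loops through the few continuations it has seen, which is a decent reminder that whatever quality ChatGPT shows comes from the scale of the model and the training data, not from the loop itself.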

maznblu

May 3rd, 2023 at 4:12 PM ^

In the world of chess, computers can now beat the best grandmasters. However, a human with a computer can beat a computer without a human. What the chess programs do well is process all the possible moves and determine the best move. What the chess programs can't do is strategize.

Similarly, humans are better at poker than computers. 

It appears that AI programs are good at "simpler" problems, problems that have finite outcomes and clear rules (like chess). They struggle, however, with what are known as "wicked problems," problems that have multiple interdependent variables, conflicting or unclear data, and unclear rules.  
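As a toy example of what "process all the possible moves and determine the best move" means in a game with finite outcomes and clear rules, here is plain minimax over a tiny made-up game tree. Modern engines add far more on top (pruning, learned evaluation functions), so this is only a sketch of the search idea.

```python
# Minimal minimax over a hand-made game tree. Leaves are scores from the
# maximizing player's point of view; inner nodes map move -> subtree.
# With clear rules and finite outcomes, "best move" is exhaustive search.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):      # leaf: a final score
        return node, None
    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move, child in node.items():
        score, _ = minimax(child, not maximizing)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_move, best_score = move, score
    return best_score, best_move

# A tiny two-ply game: we move, then the opponent picks the reply worst for us.
game = {
    "a": {"x": 3, "y": 5},   # opponent answers "x", leaving us 3
    "b": {"x": 6, "y": 1},   # opponent answers "y", leaving us 1
}
print(minimax(game, maximizing=True))   # (3, 'a'): "a" guarantees the better outcome
```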

Future employment is likely to be available for people who are good at working with "wicked problems."

I highly recommend the book, Range: Why Generalists Triumph in a Specialized World by David Epstein.

mgogobermouch

May 3rd, 2023 at 7:59 PM ^

"However, a human with a computer can beat a computer without a human. "

Are you sure this is still true? It's a little hard to find data, so please correct me if I'm wrong, but I think that since 2017 computers playing alone have beaten human-computer collaborations. (See the Wikipedia article on advanced chess, for example.) And computers have made enormous advances since 2017.

TruBluMich

May 2nd, 2023 at 8:40 PM ^

I would be lying if I said I'm not watching and researching this new technology, and I am a little bit concerned about misuse. However, history has proven that all new "life-changing" technologies have come with the same fears. I use the words "life-changing" as a way to compare it to other technological evolutions across history. Could you imagine the fear and panic that would ensue if you were able to bring a computer hooked up to today's internet back to the 1700s?

It's a rational fear, and I won't tell you it's nothing to worry about. However, as I was once told: in life, 90% of the things you worry about never happen, and the rest are usually nowhere near as bad as you thought they would be.

TruBluMich

May 2nd, 2023 at 8:47 PM ^

Just for fun, here is what AI thinks of you. Seems pretty harmless, except for the part where it says it can't access the information and then, you know, accesses the information lol.

As an AI language model, I cannot access a specific user's posting history on MgoBlog. However, based on the replies made by the user "bodybag" on MgoBlog, I can provide a generalization of their posting history.

From the replies made by "bodybag", it seems that they are a regular user on MgoBlog and are actively engaged in discussions related to University of Michigan sports. Their posts are generally concise and to the point, often providing their opinion on a specific topic or responding to a previous post.

Based on the language used in their replies, "bodybag" appears to have a good understanding of the University of Michigan sports landscape and is knowledgeable about the teams and players. They also seem to have a good sense of humor and occasionally use sarcasm and irony to make their point.

Overall, "bodybag" appears to be a passionate University of Michigan sports fan who enjoys engaging in discussions and debates with fellow fans on MgoBlog.

UMForLife

May 2nd, 2023 at 8:45 PM ^

Are you worried we are going to be "bodybag" for AI? (Pun intended)

Humans have always been able to adapt to the situation. It is natural. Think about horses and how useful they are now. Heck, we even had a song about radio which was supposed to be destroyed by video. Humans will find something else to do if more things are automated.

Think about all the things that can still be done better. Farming is still a lot more manual than people think. Most of the population of this world lives in less industrialized societies. Americans will adapt. We always have.

If you are worried about the politics, I have a lot of confidence in the younger generation.

Relax. Don't worry. We will be fine.

bluebrains98

May 2nd, 2023 at 8:46 PM ^

Nothing to worry about. I asked ChatGPT if AI would be the beginning of the end of humanity, and here's its response. We're all good:

There is currently no evidence to suggest that AI will be the end of humanity. While there are certainly potential risks associated with the development of advanced artificial intelligence, such as the possibility of unintended consequences or the risk of malicious use, many experts believe that these risks can be mitigated through careful planning, collaboration, and regulation.

It's important to keep in mind that AI is ultimately a tool created by humans, and its behavior is shaped by the goals and values that we program into it. As such, the responsibility for ensuring that AI is developed in a way that aligns with human values and goals rests with us.

That being said, it is still important to continue to monitor the development of AI and take steps to mitigate potential risks as they arise. This includes investing in research to better understand the risks associated with advanced AI, developing ethical guidelines for the use of AI, and creating mechanisms for oversight and accountability.

DennisFranklinDaMan

May 2nd, 2023 at 9:06 PM ^

Well, that's reassuring. All we need is "careful planning, collaboration, and regulation," which our species is absolutely known for. When, in the course of human events, has there ever been anything else?

(As to what "human values and goals" are — and whether or not there's an answer to that in the first place — I'll leave that to smarter people than me).

tl;dr: We're fucked.

Blue Texan

May 2nd, 2023 at 8:48 PM ^

Been in IT since the early '80s. Two things were very big at the time: AI (which really just meant expert systems) and speech recognition.
Microsoft called their early speech recognition "wreck a nice beach," because the computers could not tell the difference between that and "recognize speech." Context is very important when interpreting human speech. So what they thought was going to take over keyboards in a couple of years actually took 30.
AI has moved just as slowly. 
In 2015, self-driving cars were predicted to be readily available by 2020. They are still years away.

My point: while AI is inevitable, it is many years away from being able to destroy all jobs or humanity. Penrose and Turing (many books, good reads) concluded that transistor-based computers would never become cognitive. Quantum computers might be able to become cognitive, but sufficiently sized quantum computers are many years away.
Things always change, just not like you’d predict, and not as fast as you would think. 

NittanyFan

May 2nd, 2023 at 8:53 PM ^

(deep breath ............ I don't think I'm going wildly off the rails here, so I'll say it) 

---------------

We'll see.  Could AI cause mass unemployment and paradigm shifts?  I tend to think the fears are overblown, but it could, yes.

But, IMO, our society should first chat about the other paradigm shifts that have already occurred over the last 3 years.  E.g., the pandemic-related paradigm shifts.  Has society really had an honest conversation and done any sort of post-mortem on this???  

Concentration of wealth (among the already-wealthy) accelerated post-pandemic, and inflation (which, of course, hits the less well off harder than the better off) took off. Neither of those is good in the long run.

IMO, society should talk about that first (and actually learn something from it) before we start fretting too much about future threats.  A hell of a lot has changed already, and none of those changes had anything to do with AI or computers.

NittanyFan

May 2nd, 2023 at 9:31 PM ^

Yeah.  Humans do have quite the capacity for being unnecessarily cruel to each other.

Take two hypothetical AIs: (1) Skynet and (2) the Paperclip Maximizer. Ultimately, both conclude that they need to exterminate mankind (because we're a threat to their missions).

Being targeted for extermination isn't good for us! 

But at least they weren't unnecessarily cruel about doing what they needed to do, unlike hundreds of thousands of humans past.

Vasav

May 3rd, 2023 at 1:22 AM ^

Highly recommend "Humankind: A Hopeful History." Human history isn't all bad; there are a lot of truly amazing, good, and positive things that allowed us to rise. We tend to focus on the bad because it's horrifying, but the fact that it horrifies us gives a sense of what we don't like.