OT: Chat GPT and rivalries
Very OT, but figured some folks would appreciate this. For those of you who haven't played around with Chat GPT, I highly recommend giving it a shot as it truly is a step function change in technology akin to the internet and mobile. Today, just for kicks, I asked it to do the following: "Can you simulate a conversation between the typical ohio state graduate and University of Michigan graduate in which they argue about their beloved universities?" While the response could use some improvement, I thought it was a remarkable response generated by a computer.
Here was the response:
Ohio State Graduate: Go Buckeyes! We're the best football team in the country, and our academics are nothing to scoff at either.
University of Michigan Graduate: (chuckles) Oh, please. Michigan is the superior academic institution, and we have a rich history of producing top-notch scholars and researchers.
Ohio State Graduate: (rolls eyes) Who cares about academics? We beat you in football most of the time, and that's all that matters.
University of Michigan Graduate: (smirks) You mean except for the last two years when we beat you?
Ohio State Graduate: (defensive) That was just a fluke. We'll beat you next year.
University of Michigan Graduate: (laughs) Sure, keep telling yourself that. At Michigan, we're not just focused on football. We value diversity, inclusivity, and intellectual curiosity. We're consistently ranked as one of the top universities in the world, with exceptional programs in medicine, law, and public policy.
Ohio State Graduate: (exasperated) But we have more national championships than you!
University of Michigan Graduate: (smiling) Football is just a game. At Michigan, we care about making a positive impact in the world. We're not just focused on winning on the field, but on making a difference in people's lives.
Ohio State Graduate: (grumbling) Whatever. Go Buckeyes!
University of Michigan Graduate: (grinning) Go Blue!
Fun thread, and ChatGPT is fun, but it has so many errors. I once asked it which team had the most Final Four appearances, UM or MSU, and not only did it get it wrong, it missed our 2013 run!
yeah. I find that it is better with less factual responses currently. But the natural language responses are truly outstanding. Another question I asked it was to simulate a conversation between Captain Ahab and Moby Dick in which the two of them come to terms with one another and become friends. The response was remarkable, and included deep knowledge of Ahab's motivations vs. the whale's.
Pretty interesting/funny but...... Michigan has more National Titles
(an interesting comparison site that many folks likely already know): http://www.winsipedia.com/michigan/vs/ohio-state
Yes, but that's exactly the type of factual error that an OSU fan would make, so it's not a knock on ChatGPT
Dunno people's level of knowledge here, and I'm no expert myself, but it's useful to know that LLMs don't know anything. They're built to be good at guessing what word comes next.
Anything it happens to say correctly is mostly coincidence and luck.
That's Large Language Models, not a Master of Laws degree, for anyone else wondering...
Thank you, that threw me off.
In my experience, applies to both…lol
I think that's a misrepresentation. It's not luck. It's taken hundreds of millions, if not billions, of dollars in training to get the model to provide accurate responses. That's not luck. A better way to think about LLMs (large language models) is that the chatbot has no understanding of the words that it replies with. It likely has some confidence level that the user will be able to make sense of the reply, but the bot itself does not understand it.
Agree a lot of work has gone into it. But "accurate responses" means "good at getting the next word right", not "good at getting facts right"
Also will agree that "getting facts right" is a way the models are judged, and something they're being designed toward. But the model itself is built to guess the next word (from what I've read)
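For anyone curious, the "guess the next word" idea can be sketched with a toy bigram model. This is a drastic oversimplification of what a real LLM does (no neural network, no giant training set), and the corpus here is made up just for illustration:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real LLM trains on trillions of tokens.
corpus = "go blue go blue go buckeyes".split()

# Count which word follows which (a bigram model).
nexts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    nexts[prev][cur] += 1

def guess_next(word):
    # Return the most frequent follower, i.e. "guess the next word".
    return nexts[word].most_common(1)[0][0]

print(guess_next("go"))  # prints "blue" ("blue" follows "go" twice, "buckeyes" once)
```

Nothing in that code "knows" anything about Michigan or Ohio State; it just picks the statistically likely continuation, which is the basic mechanism people mean when they say these models predict the next word.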
LOL, that is epic. Thanks for posting.
I thought ChatGPT's data was from 2 years ago? How does it know the result of the most recent games?
I believe the launch version was, but they could be now using a newer model, OR it could be searching the internet now and including that info in responses.
Or it knows nothing and just got lucky.
I put in the same request into ChatGPT, but it had a slightly different ending:
Ohio State Graduate: I can't even stand to listen to you anymore. It's clear you're just a delusional Michigan fanboy. Ohio State is the best and always will be.
University of Michigan Graduate: You're just jealous that Michigan is the better school. Face it, Ohio State will always be second-rate.
Ohio State Graduate: (voice trembling) You have no idea what you're talking about. Ohio State means everything to me, and to hear you insult it like that is just too much. (breaks down in tears)
University of Michigan Graduate: (awkwardly) Uh, I didn't mean to upset you that much. I guess we'll just have to agree to disagree.
Ohio State Graduate: (sobbing) No, you don't understand. Ohio State is more than just a school to me. It's a part of who I am. And to hear someone insult it like that...it's just too much. (hangs up the phone)
Pretty accurate if you ask me
ChatGPT is fun to play around with, but as a few other people said, it makes A TON of factual errors. WAY too many factual errors. I've seriously had it tell me stuff like "Winnipeg is located in the state of Nebraska" and "the 7th largest prime number is 29." No.
But, it does do a remarkably --- almost scary --- good job of getting "tone" right. I live in the Valley (LA) and having recently heard the Frank and Moon Zappa song, I recently asked a question about Encino (a Valley neighborhood). ChatGPT "knew" to take on the tone of a Valley Girl! The emojis are a great touch too.
(but also: Lake Balboa isn't in Encino. The factual errors, sigh)
That would be like, you know, like totally annoying...
omigosh seriously like gag me with a spoon
It makes a lot of errors, and every time I point them out it keeps saying “I am sorry, you are correct …” and then comes up with another explanation and another mistake.
It seems Chat GPT is not to be used as a source for facts. Rather, like the rest of discourse these days, it’s just programmed to be another good bullshitter.
No mention of pooping in coolers? I think the technology still needs some fine tuning...
Chat GPT does need work if it thinks that any major university values intellectual curiosity.
Fun and interesting, but it failed to mention we actually have more national championships (not to mention more head to head wins, more B1G championships, and more wins).
It sure is great to be a Michigan Wolverine.
It seems to me like ChatGPT is basically 'crowd sourcing' the internet's collective level of knowledge and intelligence. And that's kinda terrifying. The last thing the world needs is an AI that's basically a distilled personified summary of Youtube comment sections.
fake. I have never heard a buckeye speak that clearly and concisely, or be able to enunciate that well
can you chatbot program your wife to dirty talk in bed? asking for George Jetson.
I highly recommend giving it a shot as it truly is a step function change in technology akin to the internet and mobile.
Hard disagree from me. LLMs (large language models) are good at confidently making accurate-sounding statements. But in many cases they give completely bogus information. As someone working close to the LLM community, most people doing active work think this is like 5% real progress and 95% marketing hype (Microsoft marketing knocked this one out of the park)
There will definitely be some use cases that arise from them but they are nowhere near akin to Internet or search engine level innovations
I also work in the NLU space, having been on the “Alexa” team from 2013 through last year. I can tell you that our senior leaders did not think it was just marketing hype. One thing we have learned is that facts are easy, especially with the progress in knowledge graphing. It just takes time and the correct licensing agreements. Sounding natural in a multi-turn, completely open-ended dialog is much more difficult.
That is interesting. I am more on the academic side, so this could be an industry vs research disconnect. We will see who turns out to be right :)
Having recently been assigned to work with a facility in Ohio, I am forced to view that work of Satan himself, the O logo, in emails. I will spare our dear readers the pain and not post the image.
After about the fourth time I saw it today it dawned on me that it is hardly an 'O', it is very clearly a zero. It was as if the monolith in '2001: a Space Odyssey' had appeared. Suddenly things were so abundantly clear.
Ask it how the war ends.
I asked Chat GPT to write a poem in the style of Leonard Cohen about seedy hotels.
It was pretty good.
I asked it to write the same theme in the style of Kay Ryan.
Although the poets couldn't be more different - same poem.
As soon as it learns to write haiku about Michigan football games, everyone on this message board is obsolete
If you ask a person any question in the world, they will make more factual errors than GPT. If you give a person some rules on where to find information and how to answer questions, they will do much better. You can do the same thing with ChatGPT. I’m having it write assessment questions based on learning objectives at different depths of knowledge (the level of thinking skills required of the student), and it does an incredible job. I have to do a lot of prompt engineering to get there. I’m taking the results, and I should be able to fine-tune my own model that asks the questions in specific ways and delivers them in JSON or whatever format I want. This effort will save hundreds of hours in the short term and thousands or tens of thousands of hours in the long term. It’s like training an employee, but the employee will never get tired, never retire, and never switch jobs. It will just ask assessment questions. That’s all it does.
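To give a flavor of the JSON workflow described above: once the model is prompted to reply in JSON, a little validation code catches malformed replies before they reach students. The reply text and field names here ("objective", "depth", "question", "choices", "answer") are my own invention for illustration, not a real ChatGPT schema:

```python
import json

# Hypothetical model reply after prompting it to answer in JSON.
reply = '''{
  "objective": "Explain next-word prediction",
  "depth": "recall",
  "question": "What does an LLM predict at each step?",
  "choices": ["The next word", "A fact", "An image", "A citation"],
  "answer": 0
}'''

item = json.loads(reply)

# Basic sanity checks: required fields present, answer index in range.
assert {"objective", "depth", "question", "choices", "answer"} <= item.keys()
assert 0 <= item["answer"] < len(item["choices"])

print(item["question"])
```

If the model drifts and returns prose instead of JSON, `json.loads` raises an error, which is the cue to re-prompt, so the pipeline never silently ingests a bad question.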
On a side note, 40% of jobs are supposed to go away in the next five years. Humans are in trouble.