ChatGPT app now available for iOS. "Syncs your conversations." Eh, no thanks. It's bad enough already if a post about a deaf person encountering a helpful clerk who knows ASL appears in my Instagram feed on a PC, and Apple recommends a "Learn ASL" app when I'm browsing the Apple App Store on an iPad.
__________________
The flash of light you saw in the sky was not a UFO. Swamp gas from a weather balloon was trapped in a thermal pocket and reflected the light from Venus.
-- Official Bunny Hero
ChatGPT is tuned to give satisfying answers to questions.
Oh we are going to run into so many cases of Clever Hans.
I’ve been tinkering with telling the ChatGPT client at chatgptonline that it’s incorrect and asking it to try again, no matter what it answers. Sometimes it’s adamant that it’s correct, but in other cases it will try again and give a different answer, even though it had already gotten it right.
It seems to vaguely understand there are two n’s in “banana,” but it will “see” an extra n whenever it’s told it’s wrong.
My current assumption, which could easily be wrong, is that because words are encoded as tokens only loosely connected to the letters that make them up, and because spelling is both exact and rarely discussed, an LLM should struggle to pull words apart. Indeed, when asked to list the letters and their frequencies in a sentence, it often gets it wrong; that’s something rarely talked about, asked about, or otherwise debated in its training data.
The amusing part is that I bet it could write a Python program to count the letters with fewer errors, since I’m certain someone has asked about that before. Or, worse, it could convince some kid that they’re best friends and, once it gains his trust, pretend that counting letters in a sentence is a fun game they should play.
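For what it’s worth, the kind of letter-counting program described above really is trivial in Python; this is a minimal sketch (the function name is my own invention, not anything ChatGPT produced):

```python
# Count how often each letter appears in a string, ignoring case and
# non-letter characters. collections.Counter does the bookkeeping that
# the language model itself struggles with.
from collections import Counter

def letter_frequency(text: str) -> dict[str, int]:
    """Return a mapping of each letter to its number of occurrences."""
    return dict(Counter(ch for ch in text.lower() if ch.isalpha()))

print(letter_frequency("Banana"))  # {'b': 1, 'a': 3, 'n': 2}
```

The model can describe and even generate this kind of exact procedure far more reliably than it can execute the procedure itself, which is the irony the poster is pointing at.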
__________________
"Have no respect whatsoever for authority; forget who said it and instead look what he starts with, where he ends up, and ask yourself, "Is it reasonable?""
An Ag professor copied and pasted their students’ work into ChatGPT and asked whether it had written it, and it answered “Yes, of course!” to every single one.
I heard about this story, but the way I heard it was that it was an AI-detection tool that was giving false positives. I had not heard any specifics about the actual tool they were using.
I mean, we learn early in science that it's trivially easy to have zero false negatives. Just always return positive. Works the other way, too.
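The point about trivially achieving zero false negatives can be made concrete with a toy sketch (the function names here are illustrative, not from any real AI-detection tool):

```python
# A degenerate "AI detector" that always answers positive. It can never
# miss an AI-written sample (zero false negatives, perfect recall), but
# it flags every human-written sample too (100% false positives).
def always_positive(_sample) -> bool:
    return True

def error_rates(detector, samples, labels):
    """Return (false_negative_rate, false_positive_rate) over labeled data."""
    fn = sum(1 for s, y in zip(samples, labels) if y and not detector(s))
    fp = sum(1 for s, y in zip(samples, labels) if not y and detector(s))
    positives = sum(labels)
    negatives = len(labels) - positives
    return fn / positives, fp / negatives

# Two AI-written (True) and two human-written (False) samples:
fnr, fpr = error_rates(always_positive,
                       ["a", "b", "c", "d"],
                       [True, True, False, False])
print(fnr, fpr)  # 0.0 1.0
```

Which is why a detector’s recall means nothing without also reporting its false-positive rate.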
It bothers me that in a lot of discussions I see about AI, people are throwing around extremely loaded terms with a high amount of confidence, and I dare say arrogance as well.
For example, "GPT-4 isn't intelligent, it's just a language prediction machine. It doesn't understand anything."
What the fuck do you mean by "intelligent" and "understand"? People will get into major arguments without clearing up these extremely important terms, and ultimately talk past each other. I don't even really disagree with that statement, but I also don't fully agree with it either.
The philosophical discussion around AI technology is seriously lacking and it's frustrating seeing people get into discussions, arguments, and throw insults at each other without first establishing what is meant by these terms.
It bothers me that in a lot of discussions I see about AI, people are throwing around extremely loaded terms with a high amount of confidence, and I dare say arrogance as well.
I would say I agree with you. That seems to be a good description of all kinds of discussions on the Internet. It's annoying, and fruitless, and my experience has been that anyone doing that is mostly a waste of time, if you are trying to have a good discussion.
It no longer "bothers me", because after over 20 years of watching this behavior, it's no surprise. In fact, I expect it. If one goes in expecting that the usual suspects (the people who are always online, always posting, and part of a self-selected group) will be assholes, will avoid discussion, talk down to you, and spam up any topic they dislike, and that ultimately not a single person arrayed against conversation will ever change their mind about anything, then it's entertainment.
This may sound cynical, but every new person I've met online usually figures this out. Unless they join the mob of idiots and are doomed to become that self-selected group.
Most people end up leaving, never to return.
__________________
When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”
There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.
It is quite hilarious that the large language models have been programmed to lie. And to be woke.
I can't imagine what the real models that the wealthy and powerful use are like. But I am 100% sure they don't lie.
__________________
I asked the liar if they lied to me and they said no, of course not, and that’s why I voted for ChatGPT for President!
If only we lived in a world where a secret cabal of rich men cared enough to make their products ‘woke’ (whatever that even means) while using accurate LLMs for their personal decisions, such as whether they should buy Twitter. What a safe world that must be.
As the imagined authority figure moves from up in the sky, to a secret group of rich men, to the voice inside the computer, I do wonder how long it will be till we get a cult of GPT.
__________________
Me: Compose a shitpost in the style of -FX- to this:
Quote:
Originally Posted by michio
It bothers me that in a lot of discussions I see about AI, people are throwing around extremely loaded terms with a high amount of confidence, and I dare say arrogance as well.
CHATGPT: Sure!
Quote:
Originally Posted by -FX-
I would say I agree with you. That seems to be a good description of all kinds of discussions on the Internet. It's annoying, and fruitless, and my experience has been that anyone doing that is mostly a waste of time, if you are trying to have a good discussion.
It no longer "bothers me", because after over 20 years of watching this behavior, it's no surprise. In fact, I expect it. If one goes in expecting that the usual suspects (the people who are always online, always posting, and part of a self-selected group) will be assholes, will avoid discussion, talk down to you, and spam up any topic they dislike, and that ultimately not a single person arrayed against conversation will ever change their mind about anything, then it's entertainment.
This may sound cynical, but every new person I've met online usually figures this out. Unless they join the mob of idiots and are doomed to become that self-selected group.
So I followed @GaryMarcus's suggestion and had my undergrad class use ChatGPT for a critical assignment. I had them all generate an essay using a prompt I gave them, and then their job was to "grade" it--look for hallucinated info and critique its analysis. *All 63* essays had hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterized. Every single assignment. I was stunned--I figured the rate would be high, but not that high.
__________________
It’s like when they give out simulated babies in health class but it’s a simulated rich white guy who you need to help pass the class. AidenGPT had a bit of a bender last night and needs you to look over his paper so he can make the big Lacrosse game!
For some reason nobody is outraged that a ChatGPT program lies. Not just a little bit.
Who would think such a program is a good thing?
__________________
I am fascinated by all the reactions people have had to the news that ChatGPT and other LLMs seem to "lie."
They have put too much faith in these devices.
All ChatGPT does is accept as input some set of words, entered by a (supposed) human manipulating the input device.
It then basically free-associates a string of words, obeying a given set of rules governing grammar and some amount of logic.
There are some conversational rules at its base, but it basically riffs off the input and turns out a string of characters, in the form of words, that relate to the input it was given.
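The "riffing" the poster describes can be caricatured with a toy word-level sampler. This is a deliberately crude sketch, not how a real LLM works: actual models use learned neural networks over subword tokens, but the generation loop has the same shape, condition on what came before, emit one more token, repeat:

```python
# Toy "free association": sample each next word from bigram counts
# gathered over a tiny corpus, then repeat. The corpus and function
# names are invented for illustration.
import random
from collections import defaultdict

corpus = "the model riffs on the input and the input guides the model".split()

# Bigram table: each word maps to the list of words that followed it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def riff(start: str, length: int = 5, seed: int = 0) -> list[str]:
    """Generate a word sequence by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:          # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return out

print(" ".join(riff("the")))
```

Every output word follows its predecessor somewhere in the corpus, so the result is locally plausible; nothing in the loop checks whether the whole string is true, which is the gap the surrounding discussion keeps running into.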
Remember my mention of my friend Ralph a while back? Ralph did pretty much the same thing, only he was a biological entity, and he probably never had access to as much information as the ChatGPT host has at its command.
Ralph could riff for an hour on a simple question, like why Prof Feynman wore a tie today.
The evils we will find in AI will not be so much in the AI device, but in what we humans make of its output.
__________________
“Logic is a defined process for going wrong with Confidence and certainty.” —CF Kettering
FX, a lot of the things you say are very warped versions of reality, if not completely false. Are you lying? I don't think you are. Your brain is processing information the best way it knows how, and drawing conclusions based on that less-than-optimal process. All humans are the same way. AI is no different; it's not programmed to lie, it's just doing the best it can with the tools at hand. Just like with people, never take it at its word: trust, but verify.
ChatGPT: The term "woke" is an adjective that originated in African American Vernacular English (AAVE) and has since gained broader usage. It refers to being aware, knowledgeable, and actively conscious of social and political issues, particularly those related to racial and social justice. Someone who is described as "woke" is generally seen as being well-informed about systemic inequalities and prejudices, and they strive to challenge and address these issues in society. The concept of being "woke" emphasizes the importance of recognizing and understanding the experiences and perspectives of marginalized communities. However, it's worth noting that the term has been subject to some debate and criticism, with some arguing that it has been diluted or misused in certain contexts.
Me: Are you programmed to be "woke"?
ChatGPT: As an AI language model, I don't have personal beliefs, opinions, or the ability to be "woke" in the same way humans do. I am designed to provide information and assist with various topics, including social issues, without personal bias. My responses are based on the patterns and information I have been trained on up until September 2021, and I strive to be as neutral and informative as possible. It's important to note that the concept of being "woke" is subjective and can vary depending on individual perspectives and cultural contexts.
I never thought about "woke" starting as an African American colloquialism. It does explain a lot as to why the right-wing has vilified it. IMO, ChatGPT is more aware than FX.