I have grudgingly accepted that metaphors and analogies are a thing, and "hallucinate" is a good description of the effect, even if it does not describe the mechanism.
I started to write a long post about how Copilot did a mediocre job creating a document, but then I realized about half the problem was the generic prompt I gave it.
I only used it to get started on writing it, so it served that purpose — I knew what I didn't want to write.
Hi seebs.
Maybe the circumstance you define, in which you must "grudgingly accept" the existence of parts of speech, is not truly a circumstance that requires you to accept a less adequate, or totally incorrect, word being redefined to mean an entirely separate and different thing?
I love language, but the English language is the only one I know.
I also know other topics, ranging from etymology to marketing, to poetry, to parody.
I know things about the US Constitution. The things I know are very specific and very tiny little tidbits that I am well aware no one else wants to even hear about, much less consider, much less accept.
But?
Men Tell Me Things.
Rich Men Tell Me Things.
THE AUDACITY - ohhh, they sure do have the audacity to be perfectly honest with me. What is my crazy broke queer autistic disabled ass supposed to do, TELL someone?
"NO ONE WILL BELIEVE YOU." - a few rapists, and/or rich men, and/or rich women, and/or lawyers, to my face, over the years, over and over. In my real life. In my Rael life, seebs.
But, no one cares. Shut up, Stopper. Yes, I'm familiar with the concept.
I don't even exist outside of this forum, or this internet. I am pixels. I am nothing but words on a screen, barely bits and bytes, a drop of text in a cesspool of confusion.
How could I be any kind of authority?
How could I claim anything at all, for which I can never provide evidence?
I have very strong views regarding language and marketing.
So if I say that I have feelings and opinions on words used in and around the English language, and/or slang, and/or marketing and advertising, and/or politics, and/or internet stuff, and/or CHAT BOTS, SEEBS - maybe, maybe - wait, was there ever any kind of, say, internet-based "place," such as a "chat room," or, say, an IRC channel, that maybe I had a role in, that maybe also had a relation to any kinds of CHAT BOTS?
Oh, maybe I have no idea what I have experienced or learned or how I feel.
ah well, what would a pixel like me be doing in a cesspool like this, anyway.
In the specific case of LLM chatbots, "hallucinate" is a reasonably apt description of the category of error. Yeah, we're aware that they are not actually sapient life forms with experiences, but the failure mode in question has a lot of parallels to hallucinations, in a way that wasn't an issue with pre-LLM designs.
Reddit is now selling their content as AI training data for $60m a year. If this deal holds, say goodbye to the last skeletal remains of Reddit as it takes down an as-yet-unknown AI company with it.
Since forcing out any mods who wouldn't toe the line, Reddit is increasingly bot accounts that repost old content for karma, like the subs whose sole purpose is to repost content from other subreddits. Now all that garbage content is being sold to an AI company without users' consent, which is likely the end for new creative content on Reddit that hasn't been pilfered by an AI from AI-ranked top posts on another platform.
Thanks, seebs, yes you are right. GUESS WHY I MISSED YOU.
Between posting my ridiculous rant against the use of "hallucinate" some time ago (?) and reading your reply to my silliness tonight, seebs, I read about 4 or 8 other things about "hallucinate" and LLMs/ChatGPT etc., and learned I was wrong, and why.
oh shit, Ari, I just read YOUR post. I fkn gotta go delete more shit now. Thank you for this post. My algos are ruined. Sometimes I get news-news, other times I get Beyoncé. Ok, a lot of Beyoncé. I've missed a lot of pop culture, so the algo is really helping me catch up. Have you heard of Club Shay Shay?
I was hardly on Reddit, anyway. Not after the sub drama, and the Moderator strike and all. But I was following some good subs. I gotta delete some shit if I can though, shit. I do not need that account.
Fuckin AI scrapin bullshit LLM nonsense. Maybe I won't ever try to put the Spellchecker back onto my interwebs. Fuck Reddit, fuck CEOs, fuck profits, oh and fuck Capitalism, too.
But that is only my own opinion. There are no laws against coveting.
Chat Gepetto-4, the Romance novelist, as printed by Rachelle Ayala. Now on Lonely Hearts Press. Press your lonely heart into your Chat Gepetto-4 subscription, and pay to watch this typist talk about AI and the Romance genre. Learn how to spew more nonsense and print it. It's cool to repeat the exact same book titles a few times, too.
Oh did I just write the above fake ad for real fake Romance novels and a class on how to "write" and "publish" the AI-generated results, by my own human self? It's more likely than you think.
Lonely Hearts Press is, apparently, an imprint [I need to learn this topic, separately] in the Romance fiction genre that has no problem with AI, or with their author Rachelle Ayala bragging about her 100 Romance books written with AI, or so I am reading.
She will use ChatGPT-4 to print more AI books! She can teach you, but she has to charge. Plus, you must also pay to have ChatGPT-4. The author apparently has a whole entire Facebook group devoted to using AI to be a fast writer, too.
A TwitterX thread is said to have been deleted following the human-written outrage.
March 2: From Meet-Bot to AI-Do: Crafting Romances in the Realm of AI with Rachelle Ayala. ZOOM ONLY at 1:00pm – 12:00pm MDT. ZOOM Room opens at 12:30pm MT.
Dive into the exciting world of AI-assisted romance writing with Rachelle Ayala, a multi-published romance author working on book #100.
Rachelle has been using AI for a year, and she’s happy to cut through the hype and show you insider tips–from sparking new story ideas to refining your manuscript, generating scenes, or penning compelling marketing copy.
This session is packed with live demos of how Rachelle uses AI, practical advice on how to get started, and a glimpse into the future of how AI will impact writing and storytelling.
Rachelle Ayala has degrees in Applied Mathematics and Computer Science and, as an indie author since 2010, has published more than 80 romances across multiple genres, including contemporary, romantic suspense, small town, and sweet romance. Rachelle has also published 11 non-fiction titles, including An AI Author’s Journal: From 0 to 70000 in 14 Days, Love by the Prompt: A Romance Writer’s Guide to AI-Powered Writing, and Writing Asian Romance Characters. She loves sharing what she has learned about incorporating AI into writing and publishing.
Non-CBC members are welcome to attend VIA ZOOM ONLY for the non-refundable fee of $10.
A [Meta (tbd)] Facebook post about this March 2 Zoom call about using AI to churn out crap that can't be copyrighted (she's making money, so it's okay) has been deleted. I haven't gone looking for that Group, not gonna ruin my algo over it.
This right here is why I don't think ChatGPT in its current form is going anywhere without at least a couple layers above it or some heavy changes to the law.
To be a useful assistant, sometimes verbatim data is exactly what we want; sometimes the answer just is a paragraph from an article, and training the system away from that because it exposes all the scraped copyrighted information is going to limit how accurate it can ever be.
All transformer AI is based on the plausible deniability that they didn't steal a bunch of work and feed it into the machine, even though they did in fact steal a bunch of work and feed it into the machine.
It would be interesting to see a stable diffusion AI built entirely on the artwork of those who coded it (which is not to say oh it's obviously going to be bad, because there are plenty of programmer artists, but I do wonder what it would produce).
I suppose you could say LLM hallucinations are more akin to bullshitting, although it seems to me that this is still an analogy to a human activity which requires intelligence and intention.
Unfortunately, there's not going to be a lot you can do to avoid anthropomorphizing metaphors given how the entire goal is for it to have output similar to human behavior (and while it certainly isn't there yet, it's far, far beyond something like the ELIZA chatbot).
The training data and reward functions are meant to make it, when prompted for factual information and citations, generate verbatim quotations or paraphrases of factual information from its training data, along with real citations. But it sometimes instead generates text that is merely superficially similar to factual information, and citations that have the right format without referring to any real publication, especially when the requested information isn't in its training data.
It's a lot easier to just say it "hallucinated" the information. But like I said, "bullshitting" is probably a more accurate description of what's going on most of the time.
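To make the "superficially similar" part concrete, here's a toy sketch (purely illustrative, nothing like a production model; the prompt, candidate tokens, and scores are all invented for the example). The sampler picks continuations by plausibility score alone, and nothing in the loop ever asks whether the finished sentence is true:

```python
import math
import random

# Hypothetical plausibility scores ("logits") for words that could follow
# the prompt "The paper was published in" -- the numbers are made up.
next_token_logits = {
    "Nature": 2.1,
    "2019": 1.8,
    "Science": 1.7,
    "the Journal of Imaginary Results": 1.2,  # fluent-sounding, not a real venue
}

def sample(logits, temperature=1.0):
    """Softmax sampling: higher-scoring tokens come up more often, but any
    plausible-sounding token can be chosen -- truth never enters into it."""
    weights = [math.exp(score / temperature) for score in logits.values()]
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

random.seed(0)
for _ in range(5):
    print("The paper was published in", sample(next_token_logits))
```

Run it a few times and you get confident, fluent-looking completions, some of which point at a venue that doesn't exist, which is basically the fake-citation failure mode in miniature.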
You can get decent results by imagining that it's doing improv comedy. It's trying to do an impression of what a thing sounds like.
I have suddenly found myself needing to know more about how this works, and on the one hand, it's honestly sort of terrifying how good it can be sometimes, and on the other hand, when it's wrong, it's just wildly wrong and it doesn't seem to detect this at all. Very weird experience.
I continue to find ChatGPT hugely useful, especially compared to Google search: for getting a simple, straightforward explanation of something where I can ask qualifying questions, for writing very simple Python scripts that for whatever reason I have never been able to write myself, and for generating fancy professional-sounding consulting proposals when the best I can usually come up with is "I will do the thing, pay me".
Has anyone here encountered the "Prove to me that you are human" challenge ChatGPT has recently come up with?
Really, ChatGPT's most relevant daily use is the formatting and structuring of written communication. There are common, structured ways of writing things like business proposals, and those benefit from taking many examples and synthesizing them based on your criteria.
GitHub Copilot is a tremendous time-saver for me, even if I have to check its work. My team takes a very structured, engineering approach to writing code, and it's pretty amazing how quickly it can get through a lot of the basic work based on context. I'm writing unit tests today, and I'm planning to start by putting the requirements in the test and hitting tab.
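For anyone curious what that looks like in practice, here's a rough sketch of the workflow (a hypothetical example; the function, requirements, and numbers are invented, and Copilot's actual suggestions will vary). You type the requirements as comments and the `def test_...` lines, and the completion usually offers the bodies:

```python
# Hypothetical sketch of the "requirements in the test, hit Tab" workflow.
import pytest


def apply_discount(total: float) -> float:
    """Stub of the code under test, included only so the example runs standalone."""
    if total < 0:
        raise ValueError("total must be non-negative")
    return total * 0.9 if total >= 100 else total


# Requirements, typed out first so Copilot has context to complete from:
# 1. Orders of $100 or more get 10% off.
# 2. Orders under $100 are unchanged.
# 3. Negative totals raise ValueError.

def test_discount_applies_at_threshold():
    assert apply_discount(100.00) == pytest.approx(90.00)


def test_no_discount_below_threshold():
    assert apply_discount(99.99) == pytest.approx(99.99)


def test_negative_total_raises():
    with pytest.raises(ValueError):
        apply_discount(-5.00)
```

The suggestions still need to be read line by line (it will happily assert the wrong threshold if the requirement comment is ambiguous), but it turns most of the typing into review.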
I guess I'm just grumpy at the thought of heading into a world where those excellent proposals that ChatGPT helped write are summarized by ChatGPT and a human never sees them. Then that human presumes the summary was your proposal, maybe doing their due diligence with a few ChatGPT-powered searches on the ChatGPT-written internet to double-check facts.
Having AI write anything for you seems to me to be like using instant grits for your breakfast, or instant mashed potatoes for your dinner party.
It might look somewhat like the real thing, but will likely fail to impress the intended consumer.
I’m repeating myself, but AI as it is now is never the final product. It’s an early step, to get started or to improve the structure. It always needs to be edited. I think your negativity is excessive here.
You could be right. I hope so.
Counterpoint: Any casual web search on nearly any topic will reveal that it is, in fact, very often the final product, because that garbage is what's getting published.
Since this includes things like a hospital trying to buy a cookbook-for-diabetics and getting something that includes abstract nouns as ingredients, this is in fact just plain gonna get people killed.
It's really lowered the bar to entry. Like the Glaswegian Willy Wonka. This apparently isn't the organizer's first time trying this type of half-assed experience scam, but what previously might have been a few hours arranging clipart with watermarks all over it to snag the most unaware and tired marks instead took only a few minutes to produce something that doesn't immediately look cheap in the way we're used to.
I wonder if this is a bit of an age singularity, where the kids who grow up with GPT-generated text and images will be thousands of times more attuned to fishing out "low effort," because what signifies low effort has changed so much from the days when clip art was literally clipped out of pages.
There's a new movie out this week called "Late Night with the Devil", which got caught using (obviously) AI-generated imagery for some of the commercial-break title cards. The director even made a statement about it when social media caught on. This is the kind of thing I don't want to encourage in any production, even a lower-budget independent film.
This has me in a bit of a bind: opinions from people I trust say this is a good-to-great movie, with a star-making turn for David Dastmalchian, who I generally like. I feel like just because the production decided to save some money by using mediocre AI output doesn't mean I should punish everyone else who worked on this movie. However, I kind of soured on watching this in the theaters.
This is going to be on Shudder in a month, so there was a high chance I was just going to watch this on streaming anyway...