i'm not even convinced either way on the training-data thing, like, if you had a human read everything on the internet and learn things from it, that's obviously fair use, and it's not instantly obvious that "use it to do this with a different structure of brain" has to be different.
i do think there's a lot of stuff that's just utter garbage being done with these, but at the same time, i think that if you look at the good end of things, and what those models can do, it seems pretty obvious they're actually very useful and potentially valuable -- just not in the ways that naive management consultants want them to be.
__________________ Hear me / and if I close my mind in fear / please pry it open See me / and if my face becomes sincere / beware Hold me / and when I start to come undone / stitch me together Save me / and when you see me strut / remind me of what left this outlaw torn
Mom wouldn't voluntarily talk to me on the phone half the time, and I can't imagine the horrors she'd unleash on me and the AI if I signed her up for this.
I'm using ChatGPT to update my resume. It's pretty great so far. I don't know if I'm a decent "prompt engineer" or whatever, but I'm like: "Here's my last resume. Here's what I've been doing since it was last updated. Here's the description of the job I'm currently interested in. Do your thing!" Not in those exact words, but with about that level of complexity and finesse.
It's pretty great! It's nowhere near perfect, but it takes far less emotional and mental labor than doing it myself. It mostly just looks like boilerplate resume bullshit, but it's customized to me and the job, so what more could you want for a document that people famously read for zero to 10 seconds?
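For anyone who'd rather script that same three-part prompt than paste it into the chat window, here's a rough sketch of what it might look like against the OpenAI chat API (this assumes the official Node SDK; the model name and all of the pasted text are placeholders, not anything from the post above):

```typescript
import OpenAI from "openai";

// Reads OPENAI_API_KEY from the environment.
const client = new OpenAI();

// The same three-part prompt described above: old resume, what's changed
// since then, and the job posting being targeted. Everything in angle
// brackets is placeholder text.
const prompt = [
  "Here's my last resume:",
  "<old resume>",
  "",
  "Here's what I've been doing since it was last updated:",
  "<recent work>",
  "",
  "Here's the description of the job I'm interested in:",
  "<job posting>",
  "",
  "Please rewrite my resume so it's tailored to this job.",
].join("\n");

const response = await client.chat.completions.create({
  model: "gpt-4o", // assumption: any current chat model would do
  messages: [{ role: "user", content: prompt }],
});

console.log(response.choices[0].message.content);
```

Nothing fancy either way: it's the same "here's the old one, here's what's new, here's the job" structure, just in a script instead of a chat box.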
I've read stuff on LinkedIn about how there are going to be tools to detect whether people's resumes are GPT-generated or assisted, and it's like, who cares? I work in an industry that threatens to drum you out if you don't adopt and master this tech immediately (hey, remember crypto?!), so let them figure that out I guess.
claude got shiny new features (plural, actually), and one of them is that it can now write, and run, little javascript programs to do math. i asked it what 2^53+1 is because i'm like that, and i was impressed by the answer i got.
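for context on why 2^53+1 is a fun test: it's the smallest positive integer a plain javascript number can't represent exactly (doubles only carry a 53-bit mantissa), so ordinary float math quietly drops the +1. a minimal sketch of what the little program has to get around:

```typescript
// Plain JavaScript/TypeScript numbers are IEEE-754 doubles with a 53-bit
// mantissa, so 2^53 + 1 silently rounds back down to 2^53.
const asDouble = 2 ** 53 + 1;
console.log(asDouble);                // 9007199254740992 (off by one)
console.log(asDouble === 2 ** 53);    // true
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1

// BigInt arithmetic keeps the exact value.
const exact = 2n ** 53n + 1n;
console.log(exact.toString());        // 9007199254740993
```

the right answer is 9007199254740993; anything doing the math in plain doubles lands on ...992 instead.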
If you don't want to give this views, it's a video-only clip of Phil Plait talking while the AI narrator drones on about Garfunkel. The clip after it is James Corden and Bryan Cranston in a Simon and Garfunkel sketch.
In one case from the study cited by AP, when a speaker described "two other girls and one lady," Whisper added fictional text specifying that they "were Black." In another, the audio said, "He, the boy, was going to, I’m not sure exactly, take the umbrella." Whisper transcribed it to, "He took a big piece of a cross, a teeny, small piece ... I’m sure he didn’t have a terror knife so he killed a number of people."
It only took them a year and a half to figure that out?
Amazing
I used to have a friend named Bob. (Of course I did. Didn't everyone have a friend named Bob?)
Well, Bob had a brother named Ralph. Ralph was one of those guys who could remember EVERYTHING. The only problem was, Ralph really didn't know it all. He just stored all this info in his brain, but he never really figured out what to do with it all. If you asked him a question, he would spin up and regale you with all sorts of information about the object of your question, without ever arriving at a point that actually answered the original question.
ChatGPT reminded me of Ralph.
__________________
“Logic is a defined process for going wrong with confidence and certainty.” —CF Kettering
Pro se litigant files a dog shit* lawsuit against his landlord, trial judge throws it out, plaintiff appeals, plaintiff files an opening brief in the Colorado Court of Appeals drafted by an AI, hilarity ensues.
Quote:
Al-Hamim’s opening brief contains citations to the following fake cases:
[lists eight 100% made-up cases cited in the brief]
After we attempted, without success, to locate these cases, we ordered Al-Hamim to provide complete and unedited copies of the cases, or if the citations were GAI hallucinations, to show cause why he should not be sanctioned for citing fake cases. In his response to our show cause order, Al-Hamim admitted that he relied on AI “to assist his preparation” of his opening brief, confirmed that the citations were hallucinations, and that he “failed to inspect the brief.”
Yes, fake cases in AI-generated legal documents are p. much universally called "hallucinations" these days.
The court of appeals upheld the trial court's dismissal on the merits, and it declined to dismiss the appeal as a sanction for citing fake cases. In that regard, we have what I consider the lulziest part of the story. The landlord was not pro se in this one; it was represented by three lolyers in the same firm, and they were some lazy, incompetent motherfuckers:
Quote:
[I]n their answer brief, the landlords failed to alert this court to the hallucinations in Al-Hamim’s opening brief and did not request an award of attorney fees against Al-Hamim.
That's right. The lolyers in question could not even be fucked to try looking up the cases cited in the plaintiff's brief. gg, dipshits!
* Moar like a cat piss lawsuit, as the case was based largely on alleged cat piss stank coming from a bedroom carpet.
__________________
"We can have democracy in this country, or we can have great wealth concentrated in the hands of a few, but we can't have both." ~ Louis D. Brandeis
"Psychos do not explode when sunlight hits them, I don't give a fuck how crazy they are." ~ S. Gecko