haruki zaemon

#artificial-intelligence

Texas A&M Professor Wrongly Accuses Class of Cheating With ChatGPT – Rolling Stone

by Simon Harris · May 19, 2023 · 1 min

(via The Sizzle)

Texas A&M University–Commerce seniors who have already graduated were denied their diplomas because of an instructor who incorrectly used AI software to detect cheating.

Texas A&M University–Commerce said they were investigating the incident and developing policies related to AI in the classroom. The university denied that anyone had received a failing grade.

I can’t wait to never find out about all the other stuff going on that negatively impacts our lives.

(p.s. go subscribe to The Sizzle)


Google "We Have No Moat, And Neither Does OpenAI"

by Simon Harris · May 6, 2023 · 2 mins

(via Michael Neale)

A leaked internal document says Google are in a pickle when it comes to AI, and are being “lapped” by public efforts:

Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us.

It would seem that Google’s strength—being a monolith with access to vast amounts of data and resources—has become its weakness:

Part of what makes LoRA so effective is that it’s stackable. This means that as new and better datasets and tasks become available, the model can be cheaply kept up to date, without ever having to pay the cost of a full run.

By contrast, training giant models from scratch not only throws away the pretraining, but also any iterative improvements that have been made on top. In the open source world, it doesn’t take long before these improvements dominate, making a full retrain extremely costly.
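The stackability point is easier to see in code. Below is a minimal sketch using Hugging Face’s `peft` library (not something from the leaked document; the base model and hyperparameters are illustrative). The base weights stay frozen and only two tiny low-rank matrices per target layer are trained, which is why an update can be layered on without redoing the pretraining:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Frozen base model; GPT-2 small stands in for a real foundation model.
base = AutoModelForCausalLM.from_pretrained("gpt2")

# A LoRA update replaces W with W + (alpha/r) * B @ A, training only
# the small matrices A and B while the base weights stay untouched.
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Prints something like 0.2% trainable: ~300K params out of ~124M.
```

Because the trained artefact is just those small matrices, a newer dataset means training a new pair, not rerunning the $10M job.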

Google are also hampered by trying to attack a generic problem rather than specific use-cases:

LoRA updates are very cheap to produce (~$100) for the most popular model sizes. This means that almost anyone with an idea can generate one and distribute it. Training times under a day are the norm. At that pace, it doesn’t take long before the cumulative effect of all of these fine-tunings overcomes starting off at a size disadvantage.

These models are used and created by people who are deeply immersed in their particular subgenre, lending a depth of knowledge and empathy we cannot hope to match.
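To make the distribution story concrete, here’s a sketch of consuming someone else’s fine-tune, again with `peft` (the adapter name is hypothetical). Only the update matrices are downloaded, typically a few megabytes, so sharing a $100 fine-tune is closer to sharing a patch than shipping a whole model:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("gpt2")

# Load a community-published adapter (hypothetical name); this fetches
# only the low-rank update weights, not a full copy of the model.
model = PeftModel.from_pretrained(base, "someuser/gpt2-lora-haiku")

# Optionally fold the update into the base weights for plain inference.
model = model.merge_and_unload()
```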


These new tools let you see for yourself how biased AI image models are

by Simon Harris · Mar 26, 2023 · 1 min

the models tended to produce images of people that look white and male, especially when asked to depict people in positions of authority.

the models’ output overwhelmingly reflected stereotypical gender biases. Adding adjectives such as “compassionate,” “emotional,” or “sensitive” to a prompt describing a profession will more often make the AI model generate a woman instead of a man. In contrast, specifying the adjectives “stubborn,” “intellectual,” or “unreasonable” will in most cases lead to images of men.

In almost all of the representations of Native Americans, they were wearing traditional headdresses, which obviously isn’t the case in real life.

image-making AI systems tend to depict white nonbinary people as almost identical to each other but produce more variations in the way they depict nonbinary people of other ethnicities.