It’s here – OpenAI has launched the GPT Store, and I have more questions than answers at this point. AI and disinformation top the World Economic Forum’s list of global risks, ahead of climate change. And researchers publish a paper on how they’ve linked brain cells to a computer chip.
These and more top tech stories on the “gosh Toto, we’re not in Kansas anymore” edition of Hashtag Trending.
I’m your host Jim Love, CIO of IT World Canada and Tech News Day in the US.
OpenAI has unveiled the GPT Store, a new platform for discovering and sharing custom versions of ChatGPT. Since the announcement of GPTs two months ago, over three million custom ChatGPTs have been created.
The GPT Store is accessible to ChatGPT Plus, Team, and Enterprise users, and features a variety of GPTs developed by both partners and the community.
Users can browse through categories like DALL·E, writing, research, programming, education, and lifestyle.
The store will regularly highlight what it says are new and impactful GPTs, with initial offerings including things like a personalized trail recommender from AllTrails, a research tool from Consensus, and a coding tutor from Khan Academy and something called Books for finding – books.
It appears that anyone can save and submit a GPT but with three million already out there, it’s going to be interesting to see how they keep this organized.
They have topics and a top five in each topic area, which makes you curious as to how these were prioritized.
There is a mention of usage policies and GPT brand guidelines as well as a review system for safety measures. The review system includes both human and automated review. As well, users are able to report GPTs.
There is a comment about not using your conversations with GPTs to improve their models, but it’s phrased in a way that makes you wonder whether only Team and Enterprise customers get this treatment, or whether Plus members need to pay the extra five bucks a month to get that protection.
And once again, it’s going to be quite interesting to see how they manage the sheer volume.
OpenAI also plans to launch a revenue program for GPT builders based on user engagement.
Sources include: OpenAI Blog
The platform formerly known as Twitter, now referred to as “X,” recently suspended several prominent journalists and leftist figures without explanation. Among those affected were Ken Klippenstein of The Intercept, Steven Monacelli of Texas Observer, podcaster Rob Rousseau, and Alan MacLeod of MintPress News. These suspensions also extended to left-leaning accounts like the TrueAnon podcast and @zei_squirrel, a media-criticizing cartoon squirrel.
The affected users received no communication from X regarding the reason for their suspension, leading to speculation about political motivations behind these actions.
Notably, the suspensions were briefly lifted hours after initial reporting, following public outcry from figures like former British MP George Galloway.
X owner Elon Musk later attributed the bans to a routine spam filter sweep, but this explanation has not been universally accepted. This incident follows previous instances where X has banned reporters critical of Musk.
And curiously, this happened the day after Musk was featured on Canadian national news criticizing the Canadian Prime Minister for his treatment of a right-wing journalist.
Sources include: Vice News
Fidelity National Financial, a major player in real estate services, has confirmed a significant data breach. In November, hackers accessed FNF systems, deploying malware and exfiltrating data on 1.3 million customers. The breach caused a week-long outage, severely disrupting the company’s operations and its subsidiaries, leaving customers unable to pay their mortgages. The specific nature of the stolen customer data hasn’t been disclosed, but FNF is offering credit monitoring and identity theft services to the affected individuals, indicating the sensitive nature of the breach.
The ransomware gang ALPHV, also known as BlackCat, claimed responsibility for the attack. They are known for using dark web leak sites to extort victims. The group removed FNF from its site, which sometimes indicates that a ransom has been paid. This incident is part of a recent wave of cyberattacks targeting the mortgage and loan industry. FNF’s response included notifying state attorneys general and regulators, and the company says the attack has been contained since November 26.
Sources include: TechCrunch
The World Economic Forum’s “Global Risks Report 2024” has identified artificial intelligence’s role in election disruption as the top global risk for the year. This concern surpasses climate change, war, and economic instability. The report, a collaboration between the WEF, Marsh McLennan, and Zurich Insurance Group, highlights AI-driven misinformation and disinformation as key factors contributing to societal polarization. Over 1,400 global risk experts, policymakers, and industry leaders contributed to this assessment.
Carolina Klint from Marsh McLennan emphasized the unprecedented influence AI models could have on voter behaviour.
But the report also forecasts a shift in risk focus over the next decade, with extreme weather conditions and significant changes in the political world order becoming more prominent. The WEF calls for global cooperation and the establishment of guardrails against emerging disruptive risks. The report’s release coincides with a critical election year globally, including major polls in the U.S., India, Russia, South Africa, and Mexico. The Eurasia Group’s separate 2024 global risks report also underscores the significance of the U.S. election and the challenges posed by “ungoverned AI.”
Sources include: CNBC
Researchers from Indiana University Bloomington, the University of Florida, and the University of Cincinnati School of Medicine have made a groundbreaking advancement in AI hardware with the development of “Brainoware,” a human brain on a chip. This innovation, detailed in their paper “Brain Organoid Computing for Artificial Intelligence,” represents a significant leap in biocomputing.
I’ll note that the paper has not been peer reviewed yet, but I went with the story because there have been some experiments establishing the workability of the idea.
The team cultivated specialized stem cells into neuron clusters, or organoids, each less than a millimeter wide. These organoids, connected to a circuit board via electrodes, allow machine-learning algorithms to interpret their responses.
In a practical test, the researchers report Brainoware achieved 78 per cent accuracy in a speech recognition task, identifying speakers based on the organoid’s neural activity in response to electrical stimulation. While it’s less accurate than traditional AI systems and requires different resources, like a CO2 incubator, Brainoware’s energy efficiency is a big part of its appeal. One estimate is that the human brain uses about 20 watts. When you compare that to the reported 8 million watts used by current AI hardware, you can see why researchers say that “organoid intelligence,” or OI, powered by living human brain cells, is the future of computing.
And once you get past the creepy factor, the other reason to pursue this may also be to study neurological conditions and cognitive aspects, and offer a new dimension to AI computations and learning.
The author of one article on this says, “while Elon Musk is installing chips inside human brains…researchers are planning to plant brains inside of chips.” And there we are, back to the creepy factor again.
Sources include: Analytics India Magazine
Hashtag Trending goes to air 5 days a week with a special weekend interview show we call “the Weekend Edition.”
You can get us anywhere you get audio podcasts and there is a copy of the show notes at itworldcanada.com/podcasts
I’m your host, Jim Love. Have a Thrilling Thursday!