French soccer club Paris Saint-Germain F.C. (PSG) teamed up with its sponsor Crypto.com to begin releasing its new series of matchday NFT posters, co-created using artificial intelligence (AI) tools for fans to collect.
Of the 1,970 versions produced, only seven were selected for the exclusive NFT collection, which will be distributed for free on Crypto.com.
PSG’s partnership with Crypto.com began in 2021, marking the football club’s first official crypto platform sponsor and valued at around $30 million over three years.
The posters, created by generative AI artist Benjamin Benichou, will debut during seven of PSG’s matches this season. The series kicks off on September 24 against Olympique de Marseille in France’s Ligue 1 soccer league.
Paris Saint-Germain continues to explore exciting new avenues in the fields of art and technology. This season the club is working with the artistic director Benjamin Benichou to create a unique collection of match posters using artificial intelligence. https://t.co/Ay8nza7nLK
“Innovation and art are in Paris Saint-Germain’s DNA. We are delighted with this initial experience. The results are amazing. This collaboration with Benjamin Benichou has given us a better understanding of this new tool, which opens up endless creative possibilities,” said Paris Saint-Germain Chief Brand Officer Fabien Allègre in a statement.
PSG will also offer fans the chance to win prints of Benichou’s original poster artwork, autographed jerseys from players, and of course, VIP tickets to attend PSG’s final home match of the season.
The NFT collection will only be available on the Crypto.com NFT Platform and airdropped for free to eligible account holders on match days. Collectors of all seven NFTs can compete for a grand experience at the Parc des Princes season finale. The first NFT poster will be airdropped on September 24, 2023, for the Marseille match.
It’s worth noting that PSG has ventured into the realm of Web3 previously. In 2020, the club inked a licensing deal with NFT fantasy sports game Sorare. Additionally, the signing of global soccer superstar Lionel Messi in 2021 reportedly involved payments made through Socios crypto fan tokens.
“When you explore the intersecting potential of a brand as inspiring as Paris Saint-Germain and a tool as powerful as artificial intelligence, the possibilities for creation become infinite,” Benichou remarked. “This initiative paves the way for a new territory of artistic and creative expression where art meets technology.”
FEWOCiOUS, whose real name is Victor Langlois, is a crypto artist known for his imaginative and unique art. Affectionately called “Fewo” by his supporters, he is one of the NFT space’s first medium-native superstars.
As someone who embraces his transgender identity, Fewo’s artistry has been a powerful medium for self-expression and exploration. From 2021 to now, he’s gained a substantial following and built a devoted community around his art. FewoWorld is a universe dreamt up and created by FEWOCiOUS and the Web3 community. It is also the name of his first generative art project.
What is FEWOWORLD?
FewoWorld began in April 2022 with the launch of Paint. Dropped in partnership with Nifty Gateway, Paint Drops was the first piece of generative art from FEWOCiOUS’s FewoWorld universe.
Paint trailers showed the globular shimmering drops bursting into little Fewos exploring their whimsical world. There are 7,305 Paint Drops (both from the original sale and distributed from the vault to attendees of Paint Parties).
FEWOCiOUS shared with nft now that Paint Drops initially started as an experiment.
“I went on Twitter spaces and was all over Twitter being like, ‘Hey, I’m making this experiment. And I said, y’all, I want to make weird character things. I don’t know what I’m doing, but it’s gonna be really cool,’” he said. “I thought maybe a thousand, maybe two thousand people would buy it. But there was way more than that. And it’s crazy. It’s so weird. It’s awesome though. I’m very thankful.”
The next chapter of FewoWorld came in the form of canvases. Canvas NFTs are small squares cut from the giant canvases painted by everyone at Fewo’s Paint Parties. After each party, the canvases are photographed and digitally broken up into enough squares to airdrop one to each attendee. The Canvas pieces from the Miami Art Basel 2022 Paint Party are the first to double as clothing items.
Fewo originally started Paint Parties to give NFT NYC and other NFT conference attendees an alternative to partying and drinking.
Painting Party
“Every other party during [NFT conferences] are like clubs where there’s a DJ and everyone’s dancing and drinking and I can’t hear what the other person’s saying. And I was like, what could I do where I could talk to people? I could dance. I could also paint and I could have snacks! And I figured let’s just get a warehouse. Fill it with canvases, everyone put on protective gear and we can run around and paint and talk and do whatever, draw crazy pictures!”
FEWOS
Fewos have been part of the FewoWorld story since its inception. The little creatures roam FewoWorld, embody emotion and creativity, and look like they were taken directly from FEWOCiOUS’s art. But Fewos have been on a long creative road, going through several iterations to get them just right.
He shares that as a transgender boy, he started making art and painting because he had nowhere to express his identity verbally.
He observed that many in the space prioritized PFP drops, often at the expense of artistic quality. Believing that art was sometimes an afterthought to financial gains, he was compelled to bring something more meaningful to the table.
“And I thought, well, I’m in this space, what a unique, weird concept, a collection of all these character things, and everyone says that the PFPs are their identity. That’s like a big thing where people base their accounts off of a Punk, the Ape, or whatever. And for me, identity is something I’ve thought about a lot my whole life,” Fewo said. He explained that this is why he felt inspired to create a collection that represented himself yet could be embraced by others.
“So I wanted to do this just because it sounded cool, sounded weird, I know how to draw, and I think the Fewos have lots of emotion,” he said. “Some Fewos look sad, some Fewos look angry, some have their face split open in half, and they’re bleeding everywhere. Some have stars and little cloud mouths, lots of different things. But yeah, that’s how that started.”
The Story of Mr. MiSUNDERSTOOD
The collection showcases a diverse array of Fewos, including Mr. MiSUNDERSTOOD.
Mr. MiSUNDERSTOOD, the unofficial Fewo “mascot,” has appeared in FEWOCiOUS’s art in different iterations over the years. He made his first appearance in FEWOCiOUS’s Sotheby’s auction at the end of 2021. He remains part of the collection – and the #1 rarest Fewo – and has spawned one of the three species, the Misunderstoods.
“I made a big seven-foot-tall sculpture of him,” Fewo said, referring to the version of Mr. MiSUNDERSTOOD he made for Sotheby’s. “And then there’s a species called Misunderstoods that to me are just variants of him in a way.”
Mr. MiSUNDERSTOOD’s story began when Fewo was 14 years old and found himself drawing a blue character.
“I would always draw this little blue goopy thing. First I would call him Little Boy Blue, but then Mr. MiSUNDERSTOOD looked cooler and it sounded cool,” Fewo said. “So then I started calling him that. And then I made a big sculpture of him, and then now he’s just in my heart forever.”
From 2D to 3D
The first iterations of the Fewos concept art were simple and 2D – the easiest path to a large-scale generative collection, with little room to run into overlap issues, shadow issues, and the like. But the first few iterations didn’t capture the real essence and style of FEWOCiOUS’s art.
Eventually, Fewo and his lead animator and 3D designer, Logan, came up with a “Mr. Potato Head”-style system for the 3D characters. This let them plan “sockets” for each trait to keep traits from overlapping and running into the usual design issues.
“The problem was because so many projects are 2D, you can just do layers. And if you could just do layers, it doesn’t matter if things collide,” Fewo explained.
“But in 3D, if this heart eye is really big, and if that collides with the cheek, it will look bad. Because it’s three-dimensional, and it’ll just glitch badly. Or this hair, or this head top, that can collide with this ear, and that’ll look bad. It’s a big problem, so we made this socket system where we just drew little circles arbitrarily on the face and on the head – this is roughly where an eye should go, and it can’t go further than this.”
Fewo and Logan made invisible guidelines to make sure nothing ever collided.
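The socket idea described above amounts to a simple geometric constraint: every trait gets a bounded region, and a trait placement is valid only if it stays inside its region. Here is a minimal, hypothetical sketch of that check in 2D; the socket names, coordinates, and radii are illustrative assumptions, not values from FewoWorld's actual pipeline (which works on 3D meshes in Blender).

```python
import math

# Hypothetical socket layout: each trait is confined to a circular region
# (center + radius) so generated parts never collide. Values are made up
# for illustration only.
SOCKETS = {
    "left_eye": {"center": (-1.0, 0.5), "radius": 0.6},
    "mouth": {"center": (0.0, -1.0), "radius": 0.8},
}

def fits_socket(trait_name, position, trait_radius):
    """A trait fits if its bounding circle stays fully inside its socket."""
    socket = SOCKETS[trait_name]
    cx, cy = socket["center"]
    px, py = position
    dist = math.hypot(px - cx, py - cy)  # distance from socket center
    return dist + trait_radius <= socket["radius"]

fits_socket("left_eye", (-1.0, 0.5), 0.5)  # centered and small enough -> True
fits_socket("mouth", (0.5, -1.0), 0.5)     # drifts past the socket edge -> False
```

In a real 3D pipeline the same idea would apply per mesh with bounding volumes instead of circles, but the principle is the one Fewo describes: an eye "can't go further than this."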
“It took us a really long time and it was really hard. Yeah, and it’s hard too, because my art is so abstract that I draw things colliding all the time. I’ll draw a nose halfway inside of an eye, but you know, artfully, it stops colliding at some point.” Fewo said.
Types of Species
After debating several species ideas (sketch, abstract, humanoid, doodle, misunderstood, Frankenstein, frame, etc.), three remained. The three primary species of Fewos are:
Frankensteins
Humanoids
Misunderstoods
Another essential piece of the project was giving these characters some real life: movement, full bodies, personalities, style. Every Fewo is sculpted as a full-body creature and rigged in Blender so it can be easily animated. Fewos can also hold items and wear clothing via the Dressing Room’s equip and unequip function. Any changes made to a Fewo in the Dressing Room can visually alter the actual NFT.
How to mint a FEWO
Fewos will have a supply of up to 20,000, minted over the week of September 25th to 29th. Below is a more comprehensive breakdown and timeline of the Fewos minting process from start to finish.
In a commendable move, Fewo wanted to ensure that his long-term supporters were rewarded. As such, he’s made a significant gesture to make Fewos free for Paint holders. This move underscores Fewo’s commitment to his community and the value he places on loyalty.
“Of course, I’m thankful that they have my art, but this FewoWorld, extravaganza exploration is just, you know, I didn’t know who would want to come along with me,” he said. “I’m very grateful that a lot of people did want to go on this journey with me. And so we spent a year making these things, and I was just like, with Paint, you already believed in me, thank you so much. Here’s a piece of what I’ve been working on for so long.”
Fewos is also introducing a new feature: the Dressing Room. Here, owners can effortlessly equip or unequip items, offering a dynamic way to personalize their Fewo. While in the Dressing Room, a standing view of the Fewo is available, allowing for a comprehensive look at how the character is adorned.
In the vibrant world of FewoWorld, Flowers serve a dual purpose. Not only do they act as Mint Passes for upcoming digital wearables and holdable accessories, but unredeemed Flowers can also be equipped directly as holdable items. Furthermore, Paint Drops provide a unique twist; when equipped as holdable items, they have the power to transform a Fewo’s DNA. This results in the alteration of the Fewo’s organ or bone color to mirror that of the Paint Drop. On the other hand, Canvas NFTs offer aesthetic versatility and can be used as wearable chains.
The Dressing Room debut and the Flowers “Blooming” event are set to launch in November and December, with specific dates and times to be announced soon.
Pricing Mechanics
The FEWOCiOUS Art Holder Mint is capped at a price of 0.125 ETH, while the Public Mint has a ceiling of 0.4 ETH. The final pricing will be calculated based on the remaining Fewos after accounting for the Paint Holder Airdrop and the Paint Holder Flower Exchange & Mint. It’s estimated that up to 14,000 Fewos, out of the total 20,000 supply, will be minted after the Paint Holder events. For every deficit of 500 Fewos from this 14,000 projection, the prices for the FEWOCiOUS Art Holder Mint and Public Mint will decrease by 5% for the remaining Fewos.
For example, after the Paint Holder events, if 13.5k Fewos are minted/airdropped, prices will drop by 5%. At 13k Fewos, it’s a 10% reduction, and for 12.5k, it’s a 15% decrease, continuing in this pattern. The finalized prices and remaining supply will be announced soon after the Paint Holder Flower Exchange & Mint concludes.
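The stepped discount above is straightforward to compute: every 500-Fewo shortfall below the 14,000 projection knocks 5% off both price caps. The following sketch encodes that rule as described in the article; the function name and parameters are ours, not from any official FewoWorld tooling.

```python
def adjusted_price(cap_eth, fewos_minted, projection=14_000, step=500, discount=0.05):
    """Apply a 5% discount to the price cap for every full 500-Fewo
    shortfall below the 14,000 post-Paint-Holder projection."""
    shortfall = max(0, projection - fewos_minted)
    steps = shortfall // step          # number of full 500-Fewo deficits
    return cap_eth * (1 - steps * discount)

# Examples matching the article's scenarios:
adjusted_price(0.4, 13_500)    # 5% off the 0.4 ETH public cap -> 0.38
adjusted_price(0.125, 13_000)  # 10% off the 0.125 ETH art-holder cap -> 0.1125
adjusted_price(0.4, 14_000)    # projection met, no discount -> 0.4
```

So a 12,500-Fewo outcome would mean a 15% reduction on both caps, exactly as the article's pattern continues.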
What’s next for FEWOWORLD?
“Ohhh wowww! Ahh!!! I’m 20 right now and i feel like making Fewos has taught me a lot about working in a team & making art with multiple people and communicating my visions to someone outside of my Photoshop canvas at 4 am lol. That’s the biggest skill I’ve learned this year, along with learning lots and lots of Blender 3d hahaha.” FEWOCiOUS shared as he alluded to the future of his art and FEWOWORLD.
As FEWOCiOUS continues to pioneer the intersection of art, identity, and technology through FewoWorld, the crypto art community eagerly anticipates the next chapter of this vibrant universe. The dedication to rewarding long-standing supporters and the continuous introduction of features and interactive drops emphasize a focus on user experience and engagement. Whether it’s new species, interactive experiences, or deeper layers of customization, one thing is clear: FewoWorld is just getting started.
“Next, I wanna make clothes, Bags, everythiiiiiiiing!!! More sculptures and lil figurines, more paintings of course. ALWAYS!!!I dont know….cups even !! If you think about it, if I’m only 20 rn and i die at 100 or something, I still have 80% more of my life to live and make stuff. This is only the beginning & i feel like each day im getting more and more brain knowledge to make more cool stuff with amazing people!!!”
The Art and Spirituality Panel, a conversation with Krista Kim hosted by Matt Medved, commenced with a discussion of Kim’s work featured at The Gateway.
She discusses her piece “Resonance,” a digital video of a slowly revolving rough diamond that guides viewers on an 8-minute journey of self-discovery, inviting them to tap into their inner light and peace in spite of any challenges they may face. She explains the meaning behind the piece and why she views people as precious diamonds.
“We are formed under pressure, and there’s a lot of hardship in life as a human being; life is ups and downs,” Kim says to Medved. “But I think the more resilient we are under pressure, we can overcome difficult circumstances, and we can become beautiful diamonds.”
Medved and Kim then pondered on the profound influence of digital identity on her artistry.
Kim responded by asserting, “Data is power, data is oil; it’s a form of property.” The conversation then shifted towards the impact of blockchain technology on art ownership, and both Kim and Medved concurred that the need for verification on the blockchain has significantly transformed the art ownership landscape.
Credit: nft now
Medved added that “the pursuit of truth” is a central theme in this context. Kim and Medved acknowledged the seismic shift in the art ownership landscape, where the need for verification on the blockchain had ushered in a new era of trust and accountability. This technological innovation, they contended, not only empowered artists and collectors but also reshaped the very foundations of the art market itself.
In an era marked by new and emerging technologies and the need for transparency, provenance, and authentication, the blockchain’s immutable ledger is more essential than ever. They discussed its implications for Web3, ultimately leading to a discussion about AI and its potential benefits and drawbacks.
Medved emphasized the importance of embracing technology while also managing its associated risks.
“It’s about embracing the technology while mitigating potential downsides or dangers it can create,” Medved said.
He inquired about Kim’s evolving relationship with AI, acknowledging the creative tension that artists often experience when incorporating AI into their creative workflows due to the potential hazards it presents.
Kim revealed that she had only begun using AI in the past six months but said it has been helpful in bringing to life visions already in her head.
Kim draws an analogy between AI artists and the role of a director in a film. “AI is a tool, but the creator has to know what they want. They have to prompt-engineer what they want to create,” she said. “You still need to have a concept, a philosophy, a vision. It’s a collaborative process.”
Matt and Kim then delved into the pressing issue of the mental health crisis within the Web3 space, discussing the relentless 24/7 pressure and anxiety that artists, founders, and builders often endure.
Kim recommended that an effective method for alleviating anxiety involves abstaining from comparing oneself to others. She elaborated on the profound importance of meditation in her personal journey, highlighting how it has been instrumental in overcoming mental challenges such as depression and anxiety. In a moment of vulnerability, she emphasized that her art plays a crucial role in her self-healing journey, stemming from her efforts to confront and heal from past traumas.
Medved wrapped up the conversation by making an exciting announcement to the audience.
First Tokenized Podcast Episode
Credit: nft now
“We’re very excited to announce that we’re going to be tokenizing our very first podcast episode and airdropping it to all Now Pass holders,” he said, facing Kim. “And the subject is none other than yourself.”
As he spoke, the large screen behind him transitioned to display an image of Kim, showcasing her presence on the NFT Now Podcast. The audience erupted in cheers.
We are excited to announce our first tokenized episode of the nft now podcast with @Krista_Kim — announced live at The Gateway Korea and exclusively airdropped to all @thenowpass holders! pic.twitter.com/0OU5yfec7J
As well as commemorating a landmark moment in The Gateway’s global expansion and our first piece of tokenized media, this airdrop will unlock exciting rewards with the Now Network Member Portal’s launch later this year.
Along with airdropping the first tokenized nft now podcast episode, the company founders also announced the expansion to Now Media on October 10.
Throughout the past year, nft now has been at the forefront of exploring the future of tokenized media, and this exciting announcement marks the next significant step in its innovative journey. As we leave this panel enriched by Kim’s wisdom and inspired by the limitless possibilities of art and technology, it’s evident that the ever-evolving intersection of creativity and digital identity will continue to shape our world in profound and transformative ways.
The Gateway: Korea just received an unlikely addition to the artist lineup: Keith Haring.
Today (Sept. 6), Christie’s announced an exclusive auction, “Keith Haring: Pixel Pioneer,” featuring five rare digital drawings from the renowned artist Keith Haring, each crafted on an Amiga computer during the 1980s.
The posthumous online auction will open from Sept. 12-20, with a world-premiere exhibition at The Gateway: Korea in Seoul, Korea, from Sept. 6-8, followed by an exhibition at Christie’s New York from Sept. 14–19.
According to the Christie’s auction site, estimates for each piece run between $200,000 and $500,000.
Thrilled to FINALLY announce Keith Haring: Pixel Pioneer. A sale over a year in the making @ChristiesInc with @KeithHaringFdn – 5 unique digital drawings Keith did on an Amiga computer in the 80s! All on chain, all on view in Seoul (now) and NY next week! pic.twitter.com/IicurLSWKf
The legacy of Keith Haring, who straddled street and gallery art, remains unquestionable. In the 1980s, he magnificently blurred the lines between contrasting artistic spheres, marking a significant shift in art’s outreach to a broader audience. Haring’s distinct ability to bridge traditional physical art with the emerging digital era resonates especially today, in a world quickly embracing Web3.
Haring’s fascination with the digital is well documented; he was among the early adopters, much like his mentor, the legendary artist Andy Warhol. A vivid example is Haring’s portrayal of the Apple Macintosh computer in some of his works. His Amiga drawings testify to his brilliance, seamlessly incorporating his signature bold lines and pop color aesthetics. They mark the beginning of a new medium that dominates commercial design and 21st-century digital art.
The sale aims to follow in the footsteps of the auction house’s highly successful Warhol sale in 2021, in which recovered digital works by the iconic pop artist sold for $3.8 million.
Nicole Sales Giles, Christie’s Vice President and Director of Digital Art Sales, says, “It has been an honor to work with the Keith Haring Foundation on this project. Haring’s work embodied an era where art crept outside the traditional gallery walls and into the streets. I believe that as an early adopter of the digital age and as a strong proponent of bridging art and mass culture, Haring would have been at the forefront of the Web3 community. With an aesthetic that translates naturally to the digital medium, Haring’s Amiga drawings will be coveted additions to all best-in-class contemporary art collections.”
The venture has been realized in partnership with the Keith Haring Studio, a subsidiary of the Keith Haring Foundation. The foundation tirelessly works to preserve and amplify Haring’s art and principles, actively backing nonprofits that cater to children and AIDS-related education, prevention, and care.
“The minting of five natively digital masterpieces by Keith Haring created on the Commodore Amiga computer in 1987 as NFTs by Christie’s auction house carries an immense significance in digital art and NFTs,” adds Gil Vazquez, Executive Director and President of the Keith Haring Foundation. “This collection by Haring represents a pivotal moment in identifying Haring as a pioneer in the digital art space and a groundbreaking convergence of art and technology. Haring’s distinct visual language, characterized by his iconic motifs and energetic lines, translates seamlessly into the digital art realm, underscoring the lasting relevance of his artistic legacy. These works honor that legacy, reaffirming the enduring significance of Haring’s art, and emphasize the limitless possibilities of art’s digital future.”
To bring this monumental initiative to life, Christie’s collaborated with Artestar, a global licensing agency and creative consultancy.
As the boundaries between traditional and digital art continue to blur, this auction is poised to be a landmark event, further solidifying Keith Haring’s enduring legacy in the global art narrative.
Editor’s note: This article was written by an nft now staff member in collaboration with OpenAI’s GPT-4.
Let’s face it: The NFT space moves really fast. Considering how quickly things can change in the metaverse, a week in NFTs might as well be a month IRL.
Don’t get us wrong — the more people onboarded into the space, the merrier. But because of the constant influx of great art and ideas, it’s becoming increasingly difficult to keep up with all the news, launches, and general happenings.
Well, you can put the days of endless Twitter and Discord scrolling behind you as we pull together a weekly list of upcoming NFT drops you definitely don’t want to miss. Here’s what to look out for this week.
Why: This collection by Devant echoes his personal struggle of combating the relentless battle to regain control over one’s body. Devant openly shares his hospital visits on Twitter, utilizing art as a means to chronicle his inner emotions and find cathartic release. Mint price is 0.009 ETH, and the limit is 65,535 per wallet.
Why: “DreamCatcher’s Quest” by Zwist emerges as a limited edition artwork that draws its creative wellspring from previously crafted landscape compositions. This artwork portrays an expansive skyscape adorned with ethereal butterfly clouds, with each delicate specimen symbolizing dreams, aspirations, and hopes. Commanding the artwork’s center stage is a resolute and diminutive Dino character, equipped with a meticulously crafted net, fervently chasing the prized butterflies. By minting a token, you actively support an independent artist and obtain an exclusive edition of this artwork.
Why: “The Feeders” introduces a collection of AI-generated artworks born from Y Griega’s prior cryptoart pieces, mixing AI and creativity. “This assortment showcases my affinity for blending experimentation and artistry,” Griega said. “Just as I enjoy playing with audio feed fx and resampling in music, I’ve fed my own art to AI’s capabilities to craft a selection of entrancing pieces.”
Agoria & The Sandbox Collaboration
Credit: Agoria/The Sandbox
Who: Sébastien Devaud, better known to the Web3 space as Agoria, a multi-disciplinary artist.
What: 99 pieces for 0.015 ETH
When: August 29, 10 a.m. (mint date); September 7-21, holders can complete quests to win rewards from a prize pool of 100k SAND.
Why: Agoria is now expanding even further into the digital realm through a partnership with The Sandbox that combines a variety of his worlds into one, including digital art, physical experiences, and his love for music. It is also “the first-ever Web3 avatar collection that brings the deeper meaning of duality to life through dynamic appearances.” Every six hours, the avatar transforms, aligning itself with day or night. Read nft now’s exclusive on the collaboration here.
What: Genesis Mint comprising 4,000 District Home NFTs usable inside the ZTX platform.
When: OpenSea Mint (August 30)
Where: Arbitrum
Why: If you didn’t get into the pre-sale, don’t worry: you can still mint in the public sale. The Genesis Mint by ZTX will kick off the development of the ZTX NFT ecosystem, allowing early supporters to collect one of the first-ever ZTX NFTs – District Homes. These in-game assets not only give a boost to your gameplay, but will also serve as VIP passes for future releases.
Today, nft now unveils the activations and partners for The Gateway: Korea launch during Korea Blockchain Week from September 6th to 8th, alongside FACTBLOCK.
Featuring immersive installations from Christie’s, adidas /// Studio, Beatport.io, and Polkadot, The Gateway: Korea will be hosted in partnership with LG Electronics, ARC, and I’m eco. A VIP preview will kick off the event, followed by a concert curated by Beatport.io. Brand-new initiatives from multiple partners will be revealed throughout the three days.
The three-day extravaganza, coinciding with Frieze Seoul, promises an immersive audiovisual gallery featuring a constellation of leading digital artists, including renowned names such as Beeple, Claire Silver, and Krista Kim.
Furthermore, attendees will get a chance to dive deep into the blockchain world, with fireside chats and keynote speeches by industry heavyweights such as 9GAG’s Ray Chan, Lady Pheønix, ThankYouX, and Stacey King from Adidas /// Studio.
The upcoming event will be the third iteration of The Gateway series. Following the massive success of the five-day festival during Miami Basel last year, which witnessed 12,000 attendees and collaborations with brand behemoths like Instagram (Meta), RTFKT, Porsche, and MetaMask, this year’s The Gateway: Korea aims to underscore the rising significance of digital assets in art history.
“The Gateway: Korea, co-hosted by FactBlock and nft now, will showcase world-class digital artists on the Korean stage,” said Seonik Jeon, CEO of FactBlock, “bridging Korean culture and Web3 technology for a brighter future.”
Co-hosting The Gateway: Korea, Matt Medved, nft now’s CEO, expressed his enthusiasm, stating, “Korea is witnessing a groundbreaking moment of global cultural intersection. Our event aims to salute this blend by uniting the leading creators and pioneers at the helm of web3 for a one-of-a-kind experience that seamlessly merges the physical with the digital and acts as a bridge between the East and West.”
We’re excited to announce our amazing partners for The Gateway: Korea alongside @FACTBLOCK
The Korea Blockchain Week, now in its sixth year, is a cornerstone event for blockchain and web3 enthusiasts. Over six eventful days, the much-awaited KBW2023 will tell the riveting narrative of blockchain and web3. The week’s highlights include IMPACT, an essential conference hosted by FACTBLOCK and Hashed, and Seoulbound, a two-day EDM and visual art music fest.
To drive the global conversation around web3’s future, FACTBLOCK has been a pivotal force in establishing web3 communities, thus playing a central role in the web2 to web3 transition of numerous partners.
Stacey King, representing adidas /// Studio – a unique division focusing on global Web3 campaigns within the iconic sports brand – voiced her excitement: “Gateway Korea provides the impeccable timing for adidas to connect with our Community in Asia. We’re also thrilled to unveil a global debut for the Three Stripes in this domain.”
Likewise, Christie’s, the world-renowned art business that spearheaded the NFT trend in the global auction realm, is equally eager to showcase their collaboration with nft now in Asia.
“We are very excited to collaborate with nft now for the third time – and for the first time in Asia – alongside both Korea Blockchain Week and Frieze Seoul. This is a perfect activation for us, as we continue to grow our Digital Art offerings to a global client base. We are particularly excited to launch a very cool project – on view for the first time at The Gateway. You’ll have to stop by to see it!”
The event will be housed at S-Factory (3rd floor, Room 301), 11 Yeonmujang 15-gil, Seongdong-gu, Seoul, and promises a blend of technological advancements and creative energies, reiterating the dynamic evolution of art in the digital era.
Political campaign ads and donor solicitations have long been deceptive. In 2004, for example, U.S. presidential candidate John Kerry, a Democrat, aired an ad stating that Republican opponent George W. Bush “says sending jobs overseas ‘makes sense’ for America.”
Campaign fundraising solicitations are also rife with deception. An analysis of 317,366 political emails sent during the 2020 U.S. election found that deception was the norm. For example, campaigns manipulate recipients into opening emails by lying about the sender’s identity, using subject lines that trick recipients into thinking the sender is replying to the donor, or claiming the email is “NOT asking for money” and then asking for money. Both Republicans and Democrats do it.
Campaigns are now rapidly embracing artificial intelligence for composing and producing ads and donor solicitations. The results are impressive: Democratic campaigns found that AI-written donor letters were more effective than human-written ones at producing personalized text that persuades recipients to click and send donations.
A pro-Ron DeSantis super PAC featured an AI-generated imitation of Donald Trump’s voice in this ad.
And AI has benefits for democracy, such as helping staffers organize their emails from constituents or helping government officials summarize testimony.
Here are six things to look out for. I base this list on my own experiments testing the effects of political deception. As the U.S. heads into the next presidential campaign, I hope it equips voters with what to expect and what to watch out for, and encourages them to be more skeptical.
Bogus custom campaign promises
My research on the 2020 presidential election revealed that the choice voters made between Biden and Trump was driven by their perceptions of which candidate “proposes realistic solutions to problems” and “says out loud what I am thinking,” based on 75 items in a survey. These are two of the most important qualities for a candidate to have to project a presidential image and win.
AI chatbots, such as ChatGPT by OpenAI, Bing Chat by Microsoft, and Bard by Google, could be used by politicians to generate customized campaign promises deceptively microtargeting voters and donors.
Currently, when people scroll through news feeds, the articles are logged in their browsing history, which is tracked by sites such as Facebook. The user is tagged as liberal or conservative, and also tagged as holding certain interests. Political campaigns can place an ad spot in real time on the person’s feed with a customized title.
Campaigns can use AI to develop a repository of articles written in different styles and making different campaign promises. They could then embed an AI algorithm in the process – with automated commands already plugged in by the campaign – to generate bogus tailored campaign promises at the end of an ad posing as a news article or donor solicitation.
ChatGPT, for instance, could hypothetically be prompted to add material based on text from the last articles that the voter was reading online. The voter then scrolls down and reads the candidate promising exactly what the voter wants to see, word for word, in a tailored tone. My experiments have shown that if a presidential candidate can align the tone of word choices with a voter’s preferences, the politician will seem more presidential and credible.
Exploiting the tendency to believe one another
Humans tend to automatically believe what they are told. They have what scholars call a “truth-default.” They even fall prey to seemingly implausible lies.
In my experiments I found that people who are exposed to a presidential candidate’s deceptive messaging believe the untrue statements. Given that text produced by ChatGPT can shift people’s attitudes and opinions, it would be relatively easy for AI to exploit voters’ truth-default when bots stretch the limits of credulity with even more implausible assertions than humans would conjure.
A New York Times columnist had a lengthy chat with Microsoft’s Bing chatbot. Eventually, the bot tried to get him to leave his wife. “Sydney” told the reporter repeatedly “I’m in love with you,” and “You’re married, but you don’t love your spouse … you love me. … Actually you want to be with me.”
Imagine millions of these sorts of encounters, but with a bot trying to ply voters to leave their candidate for another.
AI chatbots can exhibit partisan bias. For example, they currently tend to skew well to the left politically – holding liberal biases and expressing 99% support for Biden – with far less diversity of opinions than the general population.
In 2024, Republicans and Democrats will have the opportunity to fine-tune models that inject political bias and even chat with voters to sway them.
In 2004, a campaign ad for Democratic presidential candidate John Kerry, left, lied about his opponent, Republican George W. Bush, right. Bush’s campaign lied about Kerry, too. AP Photo/Wilfredo Lee
Manipulating candidate photos
AI can change images. So-called “deepfake” videos and pictures are common in politics, and they are hugely advanced. Donald Trump has used AI to create a fake photo of himself down on one knee, praying.
Photos can be tailored more precisely to influence voters more subtly. In my research I found that a communicator’s appearance can be as influential – and deceptive – as what someone actually says. My research also revealed that Trump was perceived as “presidential” in the 2020 election when voters thought he seemed “sincere.” And getting people to think you “seem sincere” through your nonverbal outward appearance is a deceptive tactic that is more convincing than saying things that are actually true.
Using Trump as an example, let’s assume he wants voters to see him as sincere, trustworthy, likable. Certain alterable features of his appearance make him look insincere, untrustworthy and unlikable: He bares his lower teeth when he speaks and rarely smiles, which makes him look threatening.
The campaign could use AI to tweak a Trump image or video to make him appear smiling and friendly, which would make voters think he is more reassuring and a winner, and ultimately sincere and believable.
Evading blame
AI provides campaigns with added deniability when they mess up. Typically, if politicians get in trouble they blame their staff. If staffers get in trouble they blame the intern. If interns get in trouble they can now blame ChatGPT.
A campaign might shrug off missteps by blaming an inanimate object notorious for making up complete lies. When Ron DeSantis’ campaign tweeted deepfake photos of Trump hugging and kissing Anthony Fauci, staffers did not acknowledge the malfeasance or respond to reporters’ requests for comment. No human needed to, it appears, if a robot could hypothetically take the fall.
Not all of AI’s contributions to politics are potentially harmful. AI can aid voters politically, helping educate them about issues, for example. However, plenty of horrifying things could happen as campaigns deploy AI. I hope these six points will help you prepare for, and avoid, deception in ads and donor solicitations.
Generative AI tools such as Midjourney, Stable Diffusion, and DALL-E 2 have astounded us with their ability to produce remarkable images in a matter of seconds.
Despite their achievements, however, there remains a puzzling disparity between what AI image generators can produce and what we can. For instance, these tools often won’t deliver satisfactory results for seemingly simple tasks such as counting objects and producing accurate text.
If generative AI has reached such unprecedented heights in creative expression, why does it struggle with tasks even a primary school student could complete?
Exploring the underlying reasons helps shed light on the complex numerical nature of AI, and the nuance of its capabilities.
AI’s limitations with writing
Humans can easily recognize text symbols (such as letters, numbers, and characters) written in a wide variety of fonts and handwriting styles. We can also produce text in different contexts, and understand how context can change meaning.
Current AI image generators lack this inherent understanding. They have no true comprehension of what text symbols mean. These generators are built on artificial neural networks trained on massive amounts of image data, from which they “learn” associations and make predictions.
Combinations of shapes in the training images are associated with various entities. For example, two inward-facing lines that meet might represent the tip of a pencil or the roof of a house.
But when it comes to text and quantities, the associations must be incredibly accurate, since even minor imperfections are noticeable. Our brains can overlook slight deviations in a pencil’s tip or a roof – but not as much when it comes to how a word is written, or the number of fingers on a hand.
As far as text-to-image models are concerned, text symbols are just combinations of lines and shapes. Since text comes in so many different styles – and since letters and numbers are used in seemingly endless arrangements – the model often won’t learn how to effectively reproduce text.
AI-generated image produced in response to the prompt ‘KFC logo.’ | Credit: The Conversation
The main reason for this is insufficient training data. AI image generators require much more training data to accurately represent text and quantities than they do for other tasks.
The tragedy of AI hands
Issues also arise when dealing with smaller objects that require intricate details, such as hands.
Two AI-generated images produced in response to the prompt ‘young girl holding up ten fingers, realistic.’ | Credit: The Conversation
In training images, hands are often small, holding objects, or partially obscured by other elements. It becomes challenging for AI to associate the term “hand” with the exact representation of a human hand with five fingers.
Consequently, AI-generated hands often look misshapen, have additional or fewer fingers, or are partially covered by objects such as sleeves or purses.
We see a similar issue when it comes to quantities. AI models lack a clear understanding of quantities, such as the abstract concept of “four.” As such, an image generator may respond to a prompt for “four apples” by drawing on learning from myriad images featuring many quantities of apples – and return an output with the incorrect amount.
In other words, the huge diversity of associations within the training data impacts the accuracy of quantities in outputs.
Three AI-generated images produced in response to the prompt ‘5 soda cans on a table.’ | Credit: The Conversation
Will AI ever be able to write and count?
It’s important to remember text-to-image and text-to-video conversion is a relatively new concept in AI. Current generative platforms are “low-resolution” versions of what we can expect in the future.
With advancements being made in training processes and AI technology, future AI image generators will likely be much more capable of producing accurate visualizations.
It’s also worth noting most publicly accessible AI platforms don’t offer the highest level of capability. Generating accurate text and quantities demands highly optimized and tailored networks, so paid subscriptions to more advanced platforms will likely deliver better results.
ChatGPT has exploded in popularity, and people are using it to write articles and essays, generate marketing copy and computer code, or simply as a learning or research tool. However, most people don’t understand how it works or what it can do, so they are either not happy with its results or not using it in a way that can draw out its best capabilities.
I’m a human factors engineer. A core principle in my field is “never blame the user.” Unfortunately, the ChatGPT search-box interface elicits the wrong mental model and leads users to believe that entering a simple question should lead to a comprehensive result, but that’s not how ChatGPT works.
Unlike a search engine, with static and stored results, ChatGPT never copies, retrieves, or looks up information from anywhere. Rather, it generates every word anew. You send it a prompt, and based on its machine-learning training on massive amounts of text, it creates an original answer.
Most importantly, each chat retains context during a conversation, meaning that questions asked and answers provided earlier in the conversation will inform responses it generates later. The answers, therefore, are malleable, and the user needs to participate in an iterative process to shape them into something useful.
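The context-retention behavior described above can be pictured as a growing message list: every turn of the conversation is stored and sent along with the next request, which is why earlier answers shape later ones. Here is a minimal sketch in plain Python; the `ChatSession` class and its method names are illustrative, not part of any real chatbot client library:

```python
# Illustrative sketch: a chat session retains context by appending
# every turn to a history list, and the full history is what the
# model would see on the next request.
# (This class is hypothetical, not a real API client.)

class ChatSession:
    def __init__(self):
        self.history = []  # list of (role, text) pairs

    def add_user_message(self, text):
        self.history.append(("user", text))

    def add_assistant_message(self, text):
        self.history.append(("assistant", text))

    def context_for_next_request(self):
        # Everything said so far gets sent again, which is why
        # earlier questions and answers inform later responses.
        return list(self.history)

session = ChatSession()
session.add_user_message("Explain photosynthesis simply.")
session.add_assistant_message("Plants turn sunlight into sugar...")
session.add_user_message("Now explain it to a five-year-old.")

# The last question only makes sense because the model re-reads
# the whole history, including the first exchange.
print(len(session.context_for_next_request()))  # 3 messages
```

The third message ("Now explain it...") is meaningless on its own; it is the accumulated history that lets the model resolve what "it" refers to.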
Your mental model of a machine — how you conceive of it — is important for using it effectively. To understand how to shape a productive session with ChatGPT, think of it as a glider that takes you on journeys through knowledge and possibilities.
Dimensions of knowledge
You can begin by thinking of a specific dimension or space in a topic that intrigues you. If the topic were chocolate, for example, you might ask it to write a tragic love story about Hershey’s Kisses. The glider has been trained on essentially everything ever written about Kisses, and similarly it “knows” how to glide through all kinds of story spaces — so it will confidently take you on a flight through Hershey’s Kisses space to produce the desired story.
You might instead ask it to explain five ways in which chocolate is healthy and give the response in the style of Dr. Seuss. Your requests will launch the glider through different knowledge spaces — chocolate and health — toward a different destination — a story in a specific style.
To unlock ChatGPT’s full potential, you can learn to fly the glider through “transversal” spaces – areas that cross multiple domains of knowledge. By guiding it through these domains, ChatGPT will learn both the scope and angle of your interest and will begin to adjust its response to provide better answers.
For example, consider this prompt: “Can you give me advice on getting healthy?” In that query, ChatGPT does not know who the “you” is, who “me” is, or what you mean by “getting healthy.”
Instead, try this: “Pretend you are a medical doctor, a nutritionist, and a personal coach. Prepare a two-week food and exercise plan for a 56-year-old man to increase heart health.” With this, you have given the glider a more specific flight plan spanning areas of medicine, nutrition, and motivation.
If you want something more precise, then you can activate a few more dimensions. For example, add in: “And I want to lose some weight and build muscle, and I want to spend 20 minutes a day on exercise, and I cannot do pull-ups and I hate tofu.” ChatGPT will provide output taking into account all of your activated dimensions. Each dimension can be presented together or in sequence.
Flight plan
The dimensions you add through prompts can be informed by answers ChatGPT has given along the way. Here’s an example: “Pretend you are an expert in cancer, nutrition, and behavior change. Propose eight behavior-change interventions to reduce cancer rates in rural communities.” ChatGPT will dutifully present eight interventions.
Let’s say three of the ideas look the most promising. You can follow up with a prompt to encourage more details and start putting it in a format that could be used for public messaging: “Combine concepts from ideas four, six, and seven to create four new possibilities — give each a tagline, and outline the details.” Now let’s say intervention two seems promising. You can prompt ChatGPT to make it even better: “Offer six critiques of intervention two and then redesign it to address the critiques.”
ChatGPT does better if you first focus on and highlight dimensions you think are particularly important. For example, if you really care about the behavior-change aspect of the rural cancer rates scenario, you could force ChatGPT to get more nuanced and add more weight and depth to that dimension before you go down the path of interventions.
You could do this by first prompting: “Classify behavior-change techniques into six named categories. Within each, describe three approaches and name two important researchers in the category.” This will better activate the behavior-change dimension, letting ChatGPT incorporate this knowledge in subsequent explorations.
There are many categories of prompt elements you can include to activate dimensions of interest. One is domains, like “machine learning approaches.” Another is expertise, like “respond as an economist with Marxist leanings.” And another is output style, like “write it as an essay for The Economist.” You can also specify audiences, like “create and describe five clusters of our customer types and write a product description targeted to each one.”
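The prompt-element categories above (expertise, domain, output style, audience) can be combined mechanically into a single flight plan. A small sketch, assuming nothing beyond plain string composition; the `build_prompt` helper and its parameter names are illustrative, not a real API:

```python
def build_prompt(task, domain=None, expertise=None, style=None, audience=None):
    """Compose a prompt from optional 'dimension' elements.

    Each keyword argument activates one dimension of the request,
    mirroring the prompt-element categories described in the text.
    (This helper is hypothetical, for illustration only.)
    """
    parts = []
    if expertise:
        parts.append(f"Respond as {expertise}.")
    if domain:
        parts.append(f"Focus on {domain}.")
    parts.append(task)
    if style:
        parts.append(f"Write it as {style}.")
    if audience:
        parts.append(f"Target the result at {audience}.")
    return " ".join(parts)

prompt = build_prompt(
    "Propose eight behavior-change interventions to reduce cancer rates "
    "in rural communities.",
    expertise="an expert in cancer, nutrition, and behavior change",
    style="a briefing memo for county health officials",
)
print(prompt)
```

Each added keyword widens or narrows the glider's flight path; leaving one out simply drops that dimension from the request.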
Explorations, not answers
By rejecting the search engine metaphor and instead embracing a transdimensional glider metaphor, you can better understand how ChatGPT works and navigate more effectively toward valuable insights.
The interaction with ChatGPT is best performed not as a simple or undirected question-and-answer session, but as an interactive conversation that progressively builds knowledge for both the user and the chatbot. The more information you provide to it about your interests, and the more feedback it gets on its responses, the better its answers and suggestions. The richer the journey, the richer the destination.
It is important, however, to use the information provided appropriately. The facts, details, and references ChatGPT presents are not taken from verified sources. They are conjured based on its training on a vast but non-curated set of data. ChatGPT will generate a medical diagnosis the same way it writes a Harry Potter story, which is to say it is a bit of an improviser.
You should always critically evaluate the specific information it provides and consider its output as explorations and suggestions rather than as hard facts. Treat its content as imaginative conjectures that require further verification, analysis, and filtering by you, the human pilot.
From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very-much-alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images is setting off alarms about fraud and misinformation on steroids. Indeed, a group of artificial intelligence researchers and industry figures urged the industry on March 22, 2023, to pause further training of the latest AI technologies or, barring that, for governments to “impose a moratorium.”
These technologies – image generators like DALL-E, Midjourney and Stable Diffusion, and text generators like Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and don’t require technical knowledge to use.
Given the potential for widespread harm as technology companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three experts on technology policy to explain why regulating AI is such a challenge – and why it’s so important to get it right.
Human foibles and a moving target
S. Shyam Sundar
The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias” or the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot.
Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible, and so on. It clouds the user’s judgment and results in the user overly trusting machines. However, simply disabusing people of AI’s infallibility is not sufficient, because humans are known to unconsciously assume competence even when the technology doesn’t warrant it.
Research has also shown that people treat computers as social beings when the machines show even the slightest hint of humanness, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them, blindly. Regulation is needed to ensure that AI products deserve this trust and don’t exploit it.
AI poses a unique challenge because, unlike in traditional engineering systems, designers cannot be sure how AI systems will behave. When a traditional automobile was shipped out of the factory, engineers knew exactly how it would function. But with self-driving cars, the engineers can never be sure how they will perform in novel situations.
Lately, thousands of people around the world have been marveling at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their prompts. None of the engineers involved in developing these AI models could tell you exactly what the models will produce. To complicate matters, such models change and evolve with more and more interaction.
All this means there is plenty of potential for misfires. Therefore, a lot depends on how AI systems are deployed and what provisions for recourse are in place when human sensibilities or welfare are hurt. AI is more of an infrastructure, like a freeway. You can design it to shape human behaviors in the collective, but you will need mechanisms for tackling abuses, such as speeding, and unpredictable occurrences, like accidents.
AI developers will also need to be inordinately creative in envisioning ways that the system might behave and try to anticipate potential violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and policing of AI’s outcomes and products, though I believe that these frameworks should also recognize that the systems’ designers cannot always be held accountable for mishaps.
Combining ‘soft’ and ‘hard’ approaches
Cason Schmit
Regulating AI is tricky. To regulate AI well, you must first define AI and understand anticipated AI risks and benefits. Legally defining AI is important to identify what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition.
Understanding the risks and benefits of AI is also important. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.
“Soft laws” are the alternative to traditional “hard law” approaches of legislation intended to prevent specific violations. In the soft law approach, a private organization sets rules or standards for industry members. These can change more rapidly than traditional lawmaking. This makes soft laws promising for emerging technologies because they can adapt quickly to new applications and risks. However, soft laws can mean soft enforcement.
My colleagues and I propose a third way: Copyleft AI with Trusted Enforcement, or CAITE. Copyleft licensing allows for content to be used, reused, or modified easily under the terms of a license – for example, open-source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.
In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical AI standards and can be funded in part by fines from unethical conduct. This entity is like a patent troll in that it is private rather than governmental and it supports itself by enforcing the legal intellectual property rights that it collects from others. In this case, rather than enforcement for profit, the entity enforces the ethical guidelines defined in the licenses — a “troll for good.”
This model is flexible and adaptable to meet the needs of a changing AI environment. It also enables substantial enforcement options like a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to meet the unique challenges of AI.
Four key questions to ask
John Villasenor
The extraordinary recent advances in large language model-based generative AI are spurring calls to create new AI-specific regulation. Here are four key questions to ask as that dialogue progresses:
1) Is new AI-specific regulation necessary? Many of the potentially problematic outcomes from AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory loan decisions, that would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, products liability law provides a framework for pursuing remedies.
2) What are the risks of regulating a rapidly changing technology based on a snapshot of time? A classic example of this is the Stored Communications Act, which was enacted in 1986 to address then-novel digital communication technologies like email. In enacting the SCA, Congress provided substantially less privacy protection for emails more than 180 days old.
The logic was that limited storage space meant that people were constantly cleaning out their inboxes by deleting older messages to make room for new ones. As a result, messages stored for more than 180 days were deemed less important from a privacy standpoint. It’s not clear that this logic ever made sense, and it certainly doesn’t make sense in the 2020s, when the majority of our emails and other stored digital communications are older than six months.
A common rejoinder to concerns about regulating technology based on a single snapshot in time is this: If a law or regulation becomes outdated, update it. But this is easier said than done. Most people agree that the SCA became outdated decades ago. But because Congress hasn’t been able to agree on specifically how to revise the 180-day provision, it’s still on the books over a third of a century after its enactment.
3) What are the potential unintended consequences? The Allow States and Victims to Fight Online Sex Trafficking Act of 2017, known as FOSTA-SESTA, was passed in 2018 and revised Section 230 of the Communications Decency Act with the goal of combating sex trafficking. While there’s little evidence that it has reduced sex trafficking, it has had a hugely problematic impact on a different group of people: sex workers who used to rely on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad look at the potential effects of proposed regulations.
4) What are the economic and geopolitical implications? If regulators in the United States act to intentionally slow the progress in AI, that will simply push investment and innovation — and the resulting job creation — elsewhere. While emerging AI raises many concerns, it also promises to bring enormous benefits in areas including education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services, and more.
I believe AI regulations drafted with the above four questions in mind will be more likely to successfully address the potential harms of AI while also ensuring access to its benefits.
If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.
When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output.
Personalized digital assistants
Newer generations of AI models, with their more sophisticated and less rote responses, are making it harder to tell who benefits when they speak. Internet companies’ manipulating what you see to serve their own interests is nothing new. Google’s search results and your Facebook feed are filled with paid entries. Facebook, TikTok and others manipulate your feeds to maximize the time you spend on the platform, which means more ad views, over your well-being.
What distinguishes AI systems from these other internet services is how interactive they are, and how these interactions will increasingly become like relationships. It doesn’t take much extrapolation from today’s technologies to envision AIs that will plan trips for you, negotiate on your behalf, or act as therapists and life coaches.
They are likely to be with you 24/7, know you intimately, and be able to anticipate your needs. This kind of conversational interface to the vast network of services and resources on the web is within the capabilities of existing generative AIs like ChatGPT. They are on track to become personalized digital assistants.
Quite possibly, it could be much worse with AI. For that AI digital assistant to be truly useful, it will have to really know you. Better than your phone knows you. Better than Google search knows you. Better, perhaps, than your close friends, intimate partners, and therapist know you.
You have no reason to trust today’s leading generative AI tools. Leave aside the hallucinations, the made-up “facts” that GPT and other large language models produce. We expect those will be largely cleaned up as the technology improves over the next few years.
But you don’t know how the AIs are configured: how they’ve been trained, what information they’ve been given, and what instructions they’ve been commanded to follow. For example, researchers uncovered the secret rules that govern the Microsoft Bing chatbot’s behavior. They’re largely benign but can change at any time.
Making money
Many of these AIs are created and trained at enormous expense by some of the largest tech monopolies. They’re being offered to people to use free of charge, or at very low cost. These companies will need to monetize them somehow. And, as with the rest of the internet, that somehow is likely to include surveillance and manipulation.
Imagine asking your chatbot to plan your next vacation. Did it choose a particular airline or hotel chain or restaurant because it was the best for you or because its maker got a kickback from the businesses? As with paid results in Google search, newsfeed ads on Facebook, and paid placements on Amazon queries, these paid influences are likely to get more surreptitious over time.
If you’re asking your chatbot for political information, are the results skewed by the politics of the corporation that owns the chatbot? Or the candidate who paid it the most money? Or even the views of the demographic of the people whose data was used in training the model? Is your AI agent secretly a double agent? Right now, there is no way to know.
Trustworthy by law
We believe that people should expect more from the technology and that tech companies and AIs can become more trustworthy. The European Union’s proposed AI Act takes some important steps, requiring transparency about the data used to train AI models, mitigation for potential bias, disclosure of foreseeable risks, and reporting on industry-standard tests.
Most existing AIs fail to comply with this emerging European mandate, and, despite recent prodding from Senate Majority Leader Chuck Schumer, the U.S. is far behind on such regulation.
The AIs of the future should be trustworthy. Unless and until the government delivers robust consumer protections for AI products, people will be on their own to guess at the potential risks and biases of AI and to mitigate their worst effects.
So when you get a travel recommendation or political information from an AI tool, approach it with the same skeptical eye you would a billboard ad or a campaign volunteer. For all its technological wizardry, the AI tool may be little more than the same.
When I asked ChatGPT for a joke about Sicilians the other day, it implied that Sicilians are stinky.
As somebody born and raised in Sicily, I reacted to ChatGPT’s joke with disgust. But at the same time, my computer scientist brain began spinning around a seemingly simple question: Should ChatGPT and other artificial intelligence systems be allowed to be biased?
Credit: Emilio Ferrara, CC BY-ND
You might say “Of course not!” And that would be a reasonable response. But there are some researchers, like me, who argue the opposite: AI systems like ChatGPT should indeed be biased – but not in the way you might think.
Removing bias from AI is a laudable goal, but blindly eliminating biases can have unintended consequences. Instead, bias in AI can be controlled to achieve a higher goal: fairness.
Computer scientists say an AI model is biased if it unexpectedly produces skewed results. These results could exhibit prejudice against individuals or groups, or otherwise not be in line with positive human values like fairness and truth. Even small divergences from expected behavior can have a “butterfly effect,” in which seemingly minor biases can be amplified by generative AI and have far-reaching consequences.
Over months of reporting, @dinabass and I looked at thousands of images from @StableDiffusion and found that text-to-image AI takes gender and racial stereotypes to extremes worse than in the real world. 1/13 pic.twitter.com/p17YxabZU8
But systems could also be biased by design. For example, a company might design its generative AI system to prioritize formal over creative writing, or to specifically serve government industries, thus inadvertently reinforcing existing biases and excluding different views. Other societal factors, like a lack of regulations or misaligned financial incentives, can also lead to AI biases.
The challenges of removing bias
It’s not clear whether bias can – or even should – be entirely eliminated from AI systems.
Imagine you’re an AI engineer and you notice your model produces a stereotypical response, like Sicilians being “stinky.” You might think that the solution is to remove some bad examples in the training data, maybe jokes about the smell of Sicilian food. Recent research has identified how to perform this kind of “AI neurosurgery” to deemphasize associations between certain concepts.
But these well-intentioned changes can have unpredictable, and possibly negative, effects. Even small variations in the training data or in an AI model configuration can lead to significantly different system outcomes, and these changes are impossible to predict in advance. You don’t know what other associations your AI system has learned as a consequence of “unlearning” the bias you just addressed.
On the one hand, bias-mitigation strategies like these can help a model better align with human values. However, by implementing any of these approaches, developers also run the risk of introducing new cultural, ideological, or political biases.
Controlling biases
There’s a trade-off between reducing bias and making sure that the AI system is still useful and accurate. Some researchers, including me, think that generative AI systems should be allowed to be biased – but in a carefully controlled way.
For example, my collaborators and I developed techniques that let users specify what level of bias an AI system should tolerate. This model can detect toxicity in written text by accounting for in-group or cultural linguistic norms. While traditional approaches can inaccurately flag posts written in African-American English as offensive, or comments from LGBTQ+ communities as toxic, this “controllable” AI model provides a much fairer classification.
Controllable – and safe – generative AI is important to ensure that AI models produce outputs that align with human values, while still allowing for nuance and flexibility.
Toward fairness
Even if researchers could achieve bias-free generative AI, that would be just one step toward the broader goal of fairness. The pursuit of fairness in generative AI requires a holistic approach – not only better data processing, annotation, and debiasing algorithms, but also human collaboration among developers, users, and affected communities.
As AI technology continues to proliferate, it’s important to remember that bias removal is not a one-time fix. Rather, it’s an ongoing process that demands constant monitoring, refinement, and adaptation. Although developers might be unable to easily anticipate or contain the butterfly effect, they can continue to be vigilant and thoughtful in their approach to AI bias.
With artificial intelligence (AI) flying high for Web3 and the wider world to see and embrace, it shouldn’t be a surprise that the Federal Trade Commission (FTC) has turned its attention to OpenAI, the maker of ChatGPT, in a new investigation, according to an initial report by The Washington Post.
The agency sent the San Francisco startup a 20-page letter demanding answers for how it is addressing ongoing complaints of misuse of consumer data and cases of “hallucination,” i.e., instances where ChatGPT has made up facts or narratives that have caused reputational harm.
The FTC’s ask
OpenAI will now serve as the FTC’s first public case study for how the agency begins to enforce consumer protection warnings with respect to AI while addressing potentially unfair or deceptive trade practices. The company’s co-founder Sam Altman testified before Congress in May, inviting AI legislation to come into the mix.
In the letter, the FTC wants to gauge how well consumers understand “the accuracy or reliability of outputs” generated by the company’s AI tools, calling on OpenAI to:
Provide detailed descriptions of all the complaints the startup has received of its products, including ChatGPT, making “false, misleading, disparaging or harmful” statements about people;
Provide records related to a security incident that OpenAI disclosed in March, when a system bug allowed some ChatGPT users to see payment-related information, as well as data from other users’ chat histories;
Provide any research, testing, or surveys that assess customers’ understanding of how OpenAI’s products work, how they’re advertised, and how these AI-based tools can generate disparaging statements.
Zooming out
The FTC’s focus comes at a time when the agency wants to explore several instances of hallucination. The agency’s active mission is communicating that existing consumer protection laws apply to AI, despite the Biden Administration’s and Congress’s ongoing struggle to put together a regulatory framework.
Vice President Kamala Harris believes that we can both advance AI innovation and protect consumers, sharing the administration’s position on Wednesday at the White House, where Harris hosted a group of consumer protection and civil liberties advocates to discuss the safety and security risks of AI.
Ever since Ordinals burst onto the scene in January 2023, collectors have been chasing the latest trends and innovations happening on Bitcoin. Many Ethereum and Solana NFT collectors have shown interest in Ordinals but are often left confused by foreign concepts such as teleburning, cursed inscriptions, and rare sats.
This article breaks down the most important trends in the Ordinals ecosystem so newcomers can better understand the Ordinals market.
10. Teleburning
Teleburning is the act of teleporting an NFT to Bitcoin from another chain by burning it on the origin chain and then inscribing it onto Bitcoin. Only the on-chain owner of an NFT can teleburn it, and the act does not require the permission of the artist or founder. That said, in certain cases, the official project may take the stance that teleburning does not constitute an official transfer of the token. Thus all utility and IP rights associated with owning the teleburned token would be voided.
A teleburned NFT leverages a largely untested form of on-chain provenance where the original token technically sits in a wallet that nobody will ever be able to access on the origin chain. Then a smart contract on the origin chain provides an on-chain pointer to the new inscription on Bitcoin, where the “ownership” is transferred to.
CryptoPunk #8611. Credit: Yuga Labs
Recently, the Bitcoin Bandits Ordinal community pooled together funds to purchase CryptoPunk #8611 on Ethereum and then teleburned it to Bitcoin. They have now fractionalized it and plan to send the teleburned punk to Satoshi’s wallet.
9. File Type
Ordinals is an abstract protocol that supports far more than just JPEGs. You can inscribe any file type fully on-chain onto Bitcoin. People have inscribed songs, videos, games, books, and much more.
A particularly interesting example is a 3D file inscribed by the artist FAR. The file is a 3D rendering of a building that FAR designed, which can be viewed interactively in explorers and can even be projected into the real world with augmented reality on your iPhone.
Credit: FAR
There has also been excitement recently about music NFTs coming to Bitcoin. Artists and creators experimenting with unique file types are definitely worth paying attention to.
8. Domain Names
Similar to decentralized DNS systems on other chains like .eth and .sol, Bitcoin also has protocols for domain names. On Bitcoin, things are a bit different, though. Domains are free to claim, there are no renewal fees, and all TLDs are fair game. The first domain inscriptions were done through the SNS protocol under the .sats TLD, and now people have expanded to inscribing on hundreds of different TLDs.
The main rule is that to claim a domain, you have to be the first one to inscribe it. So, for example, if you inscribed test.sats but someone had already inscribed it, your inscription would be an invalid domain name. Ordinals marketplaces such as UniSat, Magic Eden, and Ordinals Wallet have already integrated with these protocols, which is a great start. Still, it is yet to be seen whether they will reach mass adoption on Bitcoin.
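The first-to-inscribe rule amounts to a simple first-come-first-served resolver. As a minimal sketch (the function, the claim data, and the lowercase normalization are illustrative assumptions, not part of any real indexer):

```python
def resolve_domains(inscriptions):
    """Resolve domain claims first-come-first-served.

    `inscriptions` is a list of (inscription_number, name) pairs;
    processing in inscription order means only the earliest claim
    of each name is valid, and later duplicates are ignored.
    """
    valid = {}
    for number, name in sorted(inscriptions):
        normalized = name.lower()  # normalization is an assumption here
        if normalized not in valid:
            valid[normalized] = number  # first claim wins
    return valid

# Hypothetical claims: the second "test.sats" inscription is invalid.
claims = [(100, "test.sats"), (250, "Test.sats"), (300, "nft.btc")]
print(resolve_domains(claims))  # {'test.sats': 100, 'nft.btc': 300}
```

Any indexer applying the same rule to the same inscriptions arrives at the same winners, which is what lets marketplaces agree on ownership without an on-chain registry.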
7. Cursed Inscriptions
The Ordinals protocol specifies that for an inscription to be valid, it must meet a certain set of criteria. There have been many inscriptions over the past several months that did something unique, which caused them to not be valid.
The Ordinals protocol underwent an upgrade in May, which indexes all invalid inscriptions as “Cursed” by assigning them a negative inscription number. This alleviated many people’s concerns by bringing the invalid inscriptions into the fold in a way that does not disrupt the order of the existing positive inscription numbers, which have become important to certain collectors.
6. Parent Child
The Ordinals Protocol will soon get an upgrade that enables on-chain provenance for collections. Every collection will have a “parent” inscription that all of its “children” inscriptions point to. Currently, which inscriptions belong to which collection is coordinated off-chain between artists, marketplaces, and explorers.
What is interesting about this upgrade is that it will introduce potentially interesting ways to think about a collection. What if a very coveted <100 inscription was used as the parent for a collection? What if a collection consisted of children, multiple parents, and a grandparent? It is unclear exactly how creators will leverage the upgrade, but there will certainly be interesting experiments that you should pay attention to.
5. File Size
The cost of inscribing a file is proportional to its size. The larger the file, the more fees you will have to pay to the Bitcoin network to “host” it. Bitcoin has a limit of 4 MB of data per block, meaning that a single inscription’s theoretical max size is 4 MB.
Inscribing over 400 KB requires working with a miner, which is a very costly and technical process, so 400 KB is the practical limit for most inscribers. In fact, out of over 14 million inscriptions, only four have been greater than 400 KB, and all of them have attempted to get to as close to 4 MB as possible. These are commonly referred to as “4 meggers” by Ordinals collectors.
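As a back-of-the-envelope sketch of why size drives cost: inscription data sits in a transaction’s witness, which SegWit counts at roughly a quarter weight, and the miner fee is the virtual size times the prevailing fee rate. The transaction overhead figure below is a rough assumption, not a protocol constant:

```python
def estimate_inscription_fee(file_bytes: int, fee_rate_sat_per_vb: float) -> int:
    """Rough fee estimate, in sats, for inscribing a file.

    Witness data counts at ~1/4 weight under SegWit, so a file of
    `file_bytes` contributes about file_bytes / 4 virtual bytes.
    The ~200 vB allowance for the rest of the transaction is an
    illustrative assumption.
    """
    TX_OVERHEAD_VB = 200
    vbytes = file_bytes / 4 + TX_OVERHEAD_VB
    return round(vbytes * fee_rate_sat_per_vb)

# A 400 KB file at 20 sat/vB costs on the order of 2,004,000 sats (~0.02 BTC).
print(estimate_inscription_fee(400_000, 20))
```

The same arithmetic shows why a near-4 MB “4 megger” is so expensive at market fee rates, and why a miner’s cooperation is needed once a file exceeds standard relay limits.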
Credit: Taproot Wizards
The most notable 4 megger is inscription 652, which was created by the Taproot Wizards collection in collaboration with Luxor Mining. It was mined in block 774,628, which at the time was the largest Bitcoin block ever mined.
On the other end of the spectrum, certain artists strive to create files as tiny as possible. They work within the constraints of their medium to create sub-1 KB pixel art and SVG pieces. Artists and projects taking the time to understand the protocol they are working with to the degree that it informs the art they create are probably worth paying attention to.
4. Inscription Numbers
The Ordinals Protocol differs from other NFT standards in that it is fully on-chain. Artists must pay a fee for every file they store on Bitcoin. Because of this, the number of inscriptions is very low compared to other chains. For example, on Ethereum, an artist could write a smart contract that programmatically generates one billion NFTs for a few hundred dollars. On Bitcoin, people have paid over $50 million in fees just to make 14 million inscriptions.
This leads to people thinking about inscriptions very differently from NFTs. Subconsciously, it feels more like an actual digital object because there is an actual on-chain file that backs it up. You could even think of the Ordinals Protocol as one giant 1/1 protocol.
Inscription Numbers come into play because every inscription is assigned a number based on the order it was inscribed in. For example, the first inscription was zero, the second inscription was one, and so on. This numbering system adds a layer of collectibility that is interesting to certain people. Not only are low or special numbers inherently collectible, but these numbers also convey age and time.
Collectors have organically formed clubs around low inscription numbers which can be thought of as unofficial collections. There is a <100 Club, <1K Club, <10K Club, and even <100K Club. Despite having a mixture of many types of inscriptions in them, these clubs each maintain their own floor price. Even if someone inscribed a text file of the word “hi” in the first 100 inscriptions, it would be worth several BTC today. Without understanding inscription numbers, the Ordinals market won’t make any sense. Inscription numbers can be the difference between a fart MP3 selling for 1 BTC or 0.0001 BTC.
Inscriptions 0-3 from the <10K Club. Credit: Ordinals
Lastly, it’s important to note that you will hear many collections referring to themselves as <100K or <1 million. This is a badge of honor signifying that they were early to Ordinals and didn’t just show up yesterday and start inscribing. The premise behind collecting low inscription numbers is the belief that holding one of the first 10,000 inscriptions on Bitcoin will resonate with collectors a decade from now the same way it does today.
3. BRC-20
Fungible tokens are the backbone of any mature Web3 ecosystem. On Bitcoin, the protocol for fungible tokens is BRC-20, a meta protocol built on top of the Ordinals Protocol. For now, most BRC-20 tokens are meme coins, but there are a few utility tokens as well. As Bitcoin’s Web3 ecosystem matures, expect to see governance tokens, DeFi tokens, and many other types of fungible tokens emerge.
BRC-20 is unique in that it is a very simple protocol. It currently does not support staking or increasing supply. Some believe that its simplicity is its strength, while others believe these constraints are holding the standard back. It is likely that BRC-20 will evolve over time and that this will unlock new use cases such as stablecoins and much more.
2. Recursion
A recent upgrade to the Ordinals Protocol unlocked a bunch of new use cases for Bitcoin. Before the recursion upgrade, inscriptions were self-contained and unaware of each other. After the recursion upgrade, inscriptions can now use a special “/content/:inscription_id” syntax to request the content of other inscriptions.
This simple change unlocks many powerful use cases. For example, rather than individually inscribing 10,000 JPEG files for a PFP collection, which would be quite expensive, you could inscribe the collection’s 200 traits and then make 10,000 more inscriptions that each use a small amount of code to request those traits and programmatically render the image. The result is the same, except it is stored on-chain much more efficiently.
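The trait-composition pattern can be sketched as follows: each “child” inscription is a tiny HTML file that layers its trait images via the recursive `/content/<inscription_id>` endpoint instead of embedding the image data itself. The builder function and the inscription IDs below are placeholders for illustration:

```python
def build_recursive_inscription(trait_ids):
    """Return the HTML body of a 'child' inscription that layers
    trait images requested through the Ordinals recursive endpoint,
    rather than embedding any image bytes directly."""
    layers = "\n".join(
        f'<img src="/content/{tid}" style="position:absolute;top:0;left:0">'
        for tid in trait_ids
    )
    return f"<!DOCTYPE html><html><body>{layers}</body></html>"

# Placeholder IDs; a real collection would use the IDs of its
# previously inscribed trait files.
html = build_recursive_inscription(["BACKGROUND_ID", "BODY_ID", "HAT_ID"])
print(html)
```

Each child is only a few hundred bytes of markup, so the expensive image data is inscribed once per trait instead of once per token.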
But let’s think bigger. What if we inscribe packages of code that everyone will be able to call? Well, that is exactly what OnChainMonkey did. They inscribed the p5.js and Three.js JavaScript packages fully on-chain and then used recursion to make calls to those packages from the inscriptions in their new Dimensions collection, allowing them to create beautiful 3D art in under 1 KB.
OCM Dimensions #106. Credit: OCM
The best part is that anyone can do this. Now that these packages are inscribed to Bitcoin, they are a public good for everyone to use to make cool generative art inexpensively. Not only is creating generative art on Bitcoin now much less expensive, but recursion also unlocked the ability to do provably random on-chain reveals on Bitcoin. This is a keystone of the Art Blocks experience on Ethereum and was a requirement for serious generative artists and collectors to take Ordinals seriously. Now that you can effectively do everything Art Blocks does but on Bitcoin, expect higher quality generative art collections to be created with Ordinals over the coming months.
1. Rare Sats
Rare sats, also called exotic or unique sats, are any sats that have some sort of special meaning. To understand rare sats, you must first understand what a sat is and its role in the Ordinals protocol. Every Bitcoin is made up of 100 million sats. The Ordinals Protocol introduces an indexing method that allows all 2.1 quadrillion sats to be individually numbered and tracked as they move throughout the Bitcoin network.
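The 2.1 quadrillion figure follows directly from Bitcoin’s fixed supply cap:

```python
SATS_PER_BTC = 100_000_000   # every bitcoin divides into 100 million sats
MAX_BTC_SUPPLY = 21_000_000  # Bitcoin's hard-capped maximum supply

total_sats = SATS_PER_BTC * MAX_BTC_SUPPLY
print(f"{total_sats:,}")  # 2,100,000,000,000,000 -> 2.1 quadrillion
```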
This effectively turns every sat into an NFT, so, of course, collectors find value in the more unique ones. For example, sats from the 10,000 BTC transaction used to purchase two Papa John’s pizzas in May 2010 are interesting to collectors. There is a very high supply of them; however, there is a “cool factor” that is hard to ignore. You could sort of think of them as a meme coin. Another example is sats from Block 9. These are the oldest sats in circulation and were mined by Satoshi Nakamoto and sent to Hal Finney in the first Bitcoin transaction ever.
Casey Rodarmor, the creator of the Ordinals Protocol, also built his own rarity system into Ordinals, which establishes arbitrary layers of rarity such as Uncommon, Rare, and Epic. An Uncommon sat, for example, is the first sat mined in every block. When a sat is not inscribed on, it is referred to as a “virgin sat.”
Things get interesting when artists combine rare sats with inscriptions. Because every inscription points to a sat, which is how its ownership is tracked, some artists choose to inscribe on special sats to amplify their art. For example, the rare sat hunter and artist Nullish inscribed art of the number nine onto the first nine inscriptions made on Block 9 sats. Not only can certain rare sats underpin the value of an inscription, but they can also be incorporated into the art itself in a way that could only be done on Bitcoin.
Leonidas is a self-proclaimed NFT historian and Ordinals collector who co-hosts The Ordinal Show and is currently building the Ord.io platform.
The explosion of interest in artificial intelligence has drawn attention not only to the astonishing capacity of algorithms to mimic humans but to the reality that these algorithms could displace many humans in their jobs. The economic and societal consequences could be nothing short of dramatic.
The route to this economic transformation is through the workplace. A widely circulated Goldman Sachs study anticipates that, over the next decade, about two-thirds of current occupations could be affected by AI, and a quarter to a half of the work people do now could be taken over by an algorithm. Up to 300 million jobs worldwide could be affected. The consulting firm McKinsey released its own study predicting an AI-powered boost of US$4.4 trillion to the global economy every year.
The implications of such gigantic numbers are sobering, but how reliable are these predictions?
I lead a research program called Digital Planet that studies the impact of digital technologies on lives and livelihoods around the world and how this impact changes over time. A look at how previous waves of such digital technologies as personal computers and the internet affected workers offers some insight into AI’s potential impact in the years to come. But if the history of the future of work is any guide, we should be prepared for some surprises.
The IT revolution and the productivity paradox
A key metric for tracking the consequences of technology on the economy is growth in worker productivity – defined as how much output of work an employee can generate per hour. This seemingly dry statistic matters to every working individual because it ties directly to how much a worker can expect to earn for every hour of work. Said another way, higher productivity is expected to lead to higher wages.
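As a toy illustration of the metric (the output and hours figures here are invented for the example, not actual statistics):

```python
def labor_productivity(output, hours_worked):
    """Worker productivity: output produced per hour of work."""
    return output / hours_worked

def growth_rate(previous, current):
    """Year-over-year productivity growth as a percentage."""
    return (current - previous) / previous * 100

# Illustrative numbers only.
last_year = labor_productivity(output=1_000_000, hours_worked=20_000)  # 50.0
this_year = labor_productivity(output=1_060_000, hours_worked=20_400)  # ~51.96
print(f"{growth_rate(last_year, this_year):.1f}%")  # ~3.9%
```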
Generative AI products are capable of producing written, graphic, and audio content or software programs with minimal human involvement. Professions such as advertising, entertainment, and creative and analytical work could be among the first to feel the effects. Individuals in those fields may worry that companies will use generative AI to do jobs they once did, but economists see great potential to boost productivity of the workforce as a whole.
The Goldman Sachs study predicts productivity will grow by 1.5 percent per year because of the adoption of generative AI alone, which would be nearly double the rate from 2010 to 2018. McKinsey is even more aggressive, saying this technology and other forms of automation will usher in the “next productivity frontier,” pushing it as high as 3.3 percent a year by 2040.
That sort of productivity boost, which would approach rates of previous years, would be welcomed by both economists and, in theory, workers as well.
If we were to trace the 20th-century history of productivity growth in the U.S., it galloped along at about 3 percent annually from 1920 to 1970, lifting real wages and living standards. Interestingly, productivity growth slowed in the 1970s and 1980s, coinciding with the introduction of computers and early digital technologies. This “productivity paradox” was famously captured in a quip from MIT economist Bob Solow: “You can see the computer age everywhere but in the productivity statistics.”
Annual labor productivity growth rate in nonfarm business sector from 1994 to 2018. Credit: The Conversation, CC-BY-ND/ U.S. Bureau of Labor Statistics
Digital technology skeptics blamed “unproductive” time spent on social media or shopping and argued that earlier transformations, such as the introductions of electricity or the internal combustion engine, had a bigger role in fundamentally altering the nature of work. Techno-optimists disagreed; they argued that new digital technologies needed time to translate into productivity growth because other complementary changes would need to evolve in parallel. Yet others worried that productivity measures were not adequate in capturing the value of computers.
For a while, it seemed that the optimists would be vindicated. In the second half of the 1990s, around the time the World Wide Web emerged, productivity growth in the U.S. doubled, from 1.5 percent per year in the first half of that decade to 3 percent in the second. Again, there were disagreements about what was really going on, further muddying the waters as to whether the paradox had been resolved. Some argued that, indeed, the investments in digital technologies were finally paying off, while an alternative view was that managerial and technological innovations in a few key industries were the main drivers.
Regardless of the explanation, just as mysteriously as it began, that late 1990s surge was short-lived. So despite massive corporate investment in computers and the internet – changes that transformed the workplace – how much the economy and workers’ wages benefited from technology remained uncertain.
Early 2000s: New slump, new hype, new hopes
While the start of the 21st century coincided with the bursting of the so-called dot-com bubble, the year 2007 was marked by the arrival of another technology revolution: the Apple iPhone, which consumers bought by the millions and which companies deployed in countless ways. Yet labor productivity growth started stalling again in the mid-2000s, ticking up briefly in 2009 during the Great Recession, only to return to a slump from 2010 to 2019.
Throughout this new slump, techno-optimists were anticipating new winds of change. AI and automation were becoming all the rage and were expected to transform work and worker productivity. Beyond traditional industrial automation, drones, and advanced robots, capital and talent were pouring into many would-be game-changing technologies, including autonomous vehicles, automated checkouts in grocery stores, and even pizza-making robots. AI and automation were projected to push productivity growth above 2 percent annually in a decade, up from the 2010-2014 lows of 0.4 percent.
But before we could get there and gauge how these new technologies would ripple through the workplace, a new surprise hit: the COVID-19 pandemic.
The pandemic productivity push – then bust
Devastating as the pandemic was, worker productivity surged after it began in 2020: global growth in output per hour worked hit 4.9 percent, the highest rate recorded since the data has been available.
Much of this steep rise was facilitated by technology: larger knowledge-intensive companies – inherently the more productive ones – switched to remote work, maintaining continuity through digital technologies such as videoconferencing and communications technologies such as Slack, and saving on commuting time and focusing on well-being.
While it was clear digital technologies helped boost productivity of knowledge workers, there was an accelerated shift to greater automation in many other sectors, as workers had to remain home for their own safety and comply with lockdowns. Companies in industries ranging from meat processing to operations in restaurants, retail, and hospitality invested in automation, such as robots and automated order-processing and customer service, which helped boost their productivity.
But then there was yet another turn in the journey along the technology landscape.
In parallel, with little warning, “generative AI” burst onto the scene, with an even more direct potential to enhance productivity while affecting jobs – at massive scale. The hype cycle around new technology restarted.
Looking ahead: Social factors on technology’s arc
Given the number of plot twists thus far, what might we expect from here on out? Here are four issues for consideration.
First, the future of work is about more than just raw numbers of workers, the technical tools they use, or the work they do; one should consider how AI affects factors such as workplace diversity and social inequities, which in turn have a profound impact on economic opportunity and workplace culture.
For example, while the broad shift toward remote work could help promote diversity with more flexible hiring, I see the increasing use of AI as likely to have the opposite effect. Black and Hispanic workers are overrepresented in the 30 occupations with the highest exposure to automation and underrepresented in the 30 occupations with the lowest exposure. While AI might help workers get more done in less time, and this increased productivity could increase wages of those employed, it could lead to a severe loss of wages for those whose jobs are displaced. A 2021 paper found that wage inequality tended to increase the most in countries in which companies already relied a lot on robots and that were quick to adopt the latest robotic technologies.
Second, as the post-COVID-19 workplace seeks a balance between in-person and remote working, the effects on productivity – and opinions on the subject – will remain uncertain and fluid. A 2022 study showed improved efficiencies for remote work as companies and employees grew more comfortable with work-from-home arrangements, but according to a separate 2023 study, managers and employees disagree about the impact: The former believe that remote working reduces productivity, while employees believe the opposite.
Third, society’s reaction to the spread of generative AI could greatly affect its course and ultimate impact. Analyses suggest that generative AI can boost worker productivity on specific jobs – for example, one 2023 study found the staggered introduction of a generative AI-based conversational assistant increased productivity of customer service personnel by 14 percent. Yet there are already growing calls to consider generative AI’s most severe risks and to take them seriously. On top of that, recognition of the astronomical computing and environmental costs of generative AI could limit its development and use.
Finally, given how wrong economists and other experts have been in the past, it is safe to say that many of today’s predictions about AI technology’s impact on work and worker productivity will prove to be wrong as well. Numbers such as 300 million jobs affected or $4.4 trillion annual boosts to the global economy are eye-catching, yet I think people tend to give them greater credibility than warranted.
Also, “jobs affected” does not mean jobs lost; it could mean jobs augmented or even a transition to new jobs. It is best to use analyses such as Goldman’s or McKinsey’s to spark our imaginations about plausible scenarios for the future of work and of workers. It’s better, in my view, to then proactively brainstorm the many factors that could affect which scenario actually comes to pass, look for early warning signs, and prepare accordingly.
The history of the future of work has been full of surprises; don’t be shocked if tomorrow’s technologies are equally confounding.
Retail giants across the United States, including Walmart, Kroger, Meijer, and Whole Foods, have recently become the targets of a series of hoax bomb threats. While these threats have, to date, proven to be unfounded, they have created a climate of apprehension and disruption in stores across the nation.
The perpetrators of these threats have maintained their anonymity by using blocked phone numbers and have demanded ransom payments in various forms, including Bitcoin, gift cards, and cash.
In one instance, an anonymous caller claimed to have planted a pipe bomb in a suburban Whole Foods store in Chicago, demanding $5,000 in Bitcoin. Similarly, a Kroger store in New Mexico was threatened with the detonation of a bomb unless a money transfer was made to the caller. In both cases, the stores were evacuated, and law enforcement was summoned, but no bombs were found.
The FBI and other law enforcement agencies are investigating these threats. Retailers have been urged to report any potential threats to 911 immediately, obtain recordings of the call, and contact their local FBI offices as a matter of urgency.
These threats have been a source of significant disruption for retailers, forcing store closures and the evacuation of customers. Retailers have been implementing new safety protocols since the incidents escalated this spring.
“It’s disruptive,” Doug Baker, vice president of industry relations at food trade group FMI, told The Wall Street Journal. “If I’m a retailer…I’ve gotta close stores and have to call law enforcement. And send customers out.”
Speaking to The Journal, Retail Industry Leaders Association senior vice president Lisa Bruno called the hoaxes “another evolving scam” targeting retailers.
Scammers often demand ransoms in cryptocurrencies like Bitcoin because they can be sent quickly and anonymously without using a bank as an intermediary. At the time of writing, Bitcoin was trading at $30,235, reflecting a decrease of 0.68% in the last 24 hours.
Editor’s note: This article was written by an nft now staff member in collaboration with OpenAI’s GPT-4.
At the anime PFP project’s highly-anticipated “Follow the Rabbit” event at Hakkasan Las Vegas last night, Azuki co-founder Zagabond announced the forthcoming Azuki Elementals collection in familiar fashion.
“Check your motherf***ing wallets,” he declared against a red backdrop in a video reminiscent of the one preceding the Beanz announcement, sending cheering attendees scrambling to follow suit.
Spanning four different “domains” and rarity tiers, Elementals is a sister collection of 20,000 NFTs that will be available for sale on June 27. All Azuki and Beanz holders will receive presale access at 9 am PT on that day.
All Azuki holders were airdropped an unrevealed and locked Elemental that will unlock for transfer following the sale. Additionally, all Azuki and Beanz holders were airdropped a Soulbound Token (SBT) to memorialize the event.
Launched in January 2022, Azuki has overcome controversy to emerge as one of the most successful PFP projects in the NFT space. At the time of publication, the project’s floor price sits just above 15 ETH ($28,500).
This is a developing story and will be updated as new information comes in.