The social network X (formerly Twitter) has introduced an image generation tool powered by the Grok-2 model, available exclusively to X Premium and X Premium+ subscribers. Since its launch on Wednesday (the 14th), users have been experimenting with the new feature, producing images that would likely be restricted on other AI platforms.
Among the generated content are images depicting violence, drug promotion, controversial symbols such as those associated with Nazism, and deepfakes of public figures.
AI Policies for Image Generation
Content restrictions are the main difference between Grok-2 and other popular image generators such as OpenAI's DALL-E 3 and Midjourney. Grok-2 on X permits a far broader range of material: the violent imagery, drug advocacy, controversial symbols, and deepfakes of public figures described above would be limited or prohibited on most competing platforms.
In contrast, DALL-E 3, available through platforms like ChatGPT and Microsoft Copilot, follows strict guidelines meant to keep output within ethical and legal bounds. It prohibits the creation of images involving copyrighted characters, real public figures, explicit content, violence, or hate speech, and it adds watermarks to mark images as AI-generated.
Midjourney also enforces clear rules to prevent the creation of images that could be used for harassment or defamation or that depict explicit, violent, or disturbing content. It restricts political imagery and content that could spread misinformation or be offensive to communities, including racism, homophobia, and other forms of prejudice.
Unrestricted Image Generation
Despite being in beta and available only to subscribers, X's Grok-2 image generation tool has sparked debate over its lack of restrictions, a flexibility that contrasts sharply with the more controlled environments of tools like DALL-E 3 and Midjourney.
The absence of content limitations in Grok-2 raises significant concerns about platforms' responsibility for moderating user-generated content. One major issue is the potential for deepfakes: manipulated images that depict public figures in fabricated or controversial scenarios. This capability could be exploited to produce misleading or damaging content, particularly during sensitive periods like election campaigns, distorting public opinion, harming reputations, and derailing important discussions.
Unlike its competitors, Grok-2 does not apply watermarks to generated images, making AI-generated content harder to identify and regulate. This contrasts with OpenAI's DALL-E 3, which uses watermarks to indicate an image's AI origin and help manage its distribution.
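To make the watermarking idea concrete, here is a minimal Python sketch of the simplest variant, a visible corner label, using the Pillow library. The file names and label text are placeholder assumptions; real provenance schemes, such as the C2PA metadata OpenAI attaches to DALL-E 3 output, embed cryptographically signed data in the file rather than relying on visible text alone.

```python
from PIL import Image, ImageDraw, ImageFont

def add_ai_watermark(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    """Stamp a small translucent text label in the corner of an image.

    A sketch of *visible* watermarking only; production provenance systems
    embed signed metadata instead of, or in addition to, visible marks.
    """
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # Measure the label and place it near the bottom-right corner.
    left, top, right, bottom = draw.textbbox((0, 0), label, font=font)
    margin = 10
    position = (img.width - (right - left) - margin,
                img.height - (bottom - top) - margin)

    # Semi-transparent white text: visible but unobtrusive.
    draw.text(position, label, font=font, fill=(255, 255, 255, 180))
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

# Hypothetical file names, for illustration only.
add_ai_watermark("generated.png", "generated_marked.png")
```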
The broader implications of Grok-2's unrestricted use highlight the need to weigh the legal and ethical boundaries of AI-generated visual content carefully.