How to Build Dynamic Websites Powered by Automated AI-Generated Content
Wouldn’t it be cool if you could make a website that generates its own content all by itself? Well, good news — you can!
I’ll take you through how I built Not Yet News, a satirical news website that takes today’s headlines and projects them 100 years into the future. Rather than waiting for humans to write its articles, the site crafts its own stories every day, generated by AI.
TL;DR:
Demo site: notyet.news
Source code: github.com/johnpolacek/notyetnews
Content Generation
First up, you’ll have to put on your Prompt Engineer hat. Try out some prompts with ChatGPT and see how changing the input affects the kind of content you get back, so you can steer it toward your use case. Think about the kind of structured data you want. In my case, I want news articles.
A trick I learned from Jared Palmer is to use OpenAI Functions to get structured JSON in the API response. With this technique, you can still get creative and finesse the prompt for interesting responses, but in a way that lets you build a content generation service around it.
Here’s what I ended up with:
import { Configuration, OpenAIApi } from "openai";
import { config } from 'dotenv';
config();

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

export async function generateParody(article: string) {
  console.log('generateParody for ' + article)
  const response = await openai.createChatCompletion({
    model: 'gpt-4',
    messages: [
      {
        role: 'user',
        content: `Generate a speculative satirical article in the style of The New York Times set 100 years into the future on the theme from this article from today's news: "${article}". Factor in how drastically different the world would be in 100 years. Create new personas and identities for people in this speculative vision of the future based on the news of today and how things will change in 100 years.`
      }
    ],
    // Describe a "print" function so the model returns its answer as structured JSON arguments
    functions: [{
      name: 'print',
      description: 'Prints a news article in json format',
      parameters: {
        type: 'object',
        properties: {
          title: {
            type: 'string',
            description: 'Title of the article'
          },
          abstract: {
            type: 'string',
            description: 'Brief description of the article'
          },
          content: {
            type: 'string',
            description: 'Text content of the article'
          },
          imageDescription: {
            type: 'string',
            description: 'A text prompt to provide DALLE so it can generate a main image to go along with the article'
          },
        }
      }
    }]
  })
  try {
    // The structured output arrives as a JSON string in the function_call arguments
    const argumentsString = response.data.choices[0].message?.function_call?.arguments;
    if (!argumentsString) {
      throw new Error('No function_call arguments in the response')
    }
    // Strip stray newlines before parsing
    const dataString = argumentsString.replace(/\\n/g, '').replace(/\\r/g, '').replace(/\n/g, '').replace(/\r/g, '');
    console.log('parsing data...')
    const data = JSON.parse(dataString)
    console.log('data parsed!')
    return data
  } catch (error) {
    console.error('Failed to parse JSON', error);
    throw error
  }
}
If you simply ask ChatGPT to generate news articles set in the future, you will quickly discover that it gets very repetitive. To avoid this, we can seed the request with content from today’s headlines via the NYTimes API.
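Here’s a rough sketch of that seeding step: pull today’s top story from the NYT Top Stories API and hand its headline to generateParody. It assumes you have a NYT_API_KEY in your environment, and the helper name is just for illustration; the full loop over every headline appears in the Automation section below.
import fetch from 'node-fetch';
import { generateParody } from './openai/generateParody';
import { config } from 'dotenv';
config();

// Illustrative helper: seed the parody generator with a real headline
async function seedFromTodaysHeadlines() {
  // Fetch today's top stories from the NYT API
  const url = 'https://api.nytimes.com/svc/topstories/v2/home.json?api-key=' + process.env.NYT_API_KEY;
  const response = await fetch(url);
  const data: any = await response.json();

  // Use the first headline and its abstract as the seed
  const topStory = data.results[0];
  return generateParody(topStory.title + ' - ' + topStory.abstract);
}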
Image Generation
Text content is nice, but a website without images is a little boring. Midjourney is the king of AI image generation, but until it offers an API, you can’t use it to automate image generation.
Instead, we can use DALLE, also from OpenAI. You send it a prompt and it generates an image (unless the prompt violates their terms of service, which I ran into when generating images of a political nature).
You can do something like this:
import fetch from 'node-fetch';
import { config } from 'dotenv';
config();

export async function generateImage(prompt: string) {
  const apiKey = process.env.OPENAI_API_KEY
  const headers = {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  }
  const req = {
    method: "POST",
    headers: headers,
    body: JSON.stringify({
      prompt,
      n: 1,
      size: "512x512"
    }),
  }
  try {
    const response = await fetch(
      "https://api.openai.com/v1/images/generations",
      req
    )
    const result: any = await response.json()
    if (response.ok) {
      // DALLE returns a temporary URL for the generated image
      return result.data[0].url
    } else {
      throw new Error(result.error?.message || "Failed to generate image")
    }
  } catch (error) {
    throw new Error("Failed to generate image")
  }
}
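Wiring the two together is straightforward: pass the imageDescription that generateParody returns into generateImage. Here’s a quick sketch, with a made-up headline and an illustrative helper name:
import { generateParody } from './openai/generateParody';
import { generateImage } from './openai/generateImage';

// Illustrative example only
async function generateArticleWithImage() {
  // Hypothetical headline, purely for demonstration
  const article = await generateParody('Scientists discover water on a distant exoplanet');
  // DALLE responds with a temporary URL for the generated image
  const imageUrl = await generateImage(article.imageDescription);
  console.log({ title: article.title, imageUrl });
}
Keep in mind that the URL you get back only works for a short while, which brings us to the next step.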
Storing Your Content
You’ll need to store your JSON data somewhere. Also, the images generated by DALLE live at a temporary URL, so you will need to move them somewhere more permanent if you want them to be around for more than a few hours.
For storage, S3 seems like the obvious choice, so let’s do that. We can write some Node functions using the AWS SDK to take care of this.
import fetch from 'node-fetch';
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { config } from 'dotenv';
config();

const s3 = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID || '',
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || ''
  }
});

export const uploadJSONToS3 = async (json: string, key: string) => {
  try {
    if (!process.env.AWS_S3_BUCKET_NAME) {
      throw new Error('AWS_S3_BUCKET_NAME is not set in the environment variables');
    }
    const command = new PutObjectCommand({
      Bucket: process.env.AWS_S3_BUCKET_NAME,
      Key: key,
      Body: json,
      ContentType: 'application/json'
    });
    await s3.send(command);
    const publicUrl = `https://${process.env.AWS_S3_BUCKET_NAME}.s3.amazonaws.com/${key}`
    console.log("JSON uploaded successfully:", publicUrl)
    return publicUrl
  } catch (error) {
    console.error("Error uploading json to S3:", error)
    throw new Error("Error uploading json to S3")
  }
};

export const transferImageToS3 = async (
  imageUrl: string,
  key: string
): Promise<string> => {
  if (!process.env.AWS_S3_BUCKET_NAME) {
    throw new Error('AWS_S3_BUCKET_NAME is not set in the environment variables');
  }
  try {
    // Download the image from the temporary DALLE URL
    console.log("Downloading image from URL:", imageUrl)
    const response = await fetch(imageUrl)
    console.log("Image downloaded successfully")
    const arrayBuffer = await response.arrayBuffer()
    const command = new PutObjectCommand({
      Bucket: process.env.AWS_S3_BUCKET_NAME,
      Key: key,
      Body: Buffer.from(arrayBuffer),
      ContentType: response.headers.get("content-type") || 'image/jpeg', // Fall back to a sensible default MIME type
    });
    await s3.send(command);
    // Construct the public URL for the uploaded image
    const publicUrl = `https://${process.env.AWS_S3_BUCKET_NAME}.s3.amazonaws.com/${key}`
    console.log("Image uploaded successfully:", publicUrl)
    return publicUrl
  } catch (error) {
    console.error("Error uploading image to S3:", error)
    throw new Error("Error uploading image to S3")
  }
}
Automation
Once we have a script that generates structured data and image content for the site, we’ll want to automate it so that our website is always fresh. In the case of my parody news site, once a day seems right.
First we need the script that generates the content.
import fetch from 'node-fetch';
import { uploadJSONToS3, transferImageToS3 } from "./aws/s3";
import { generateParody } from "./openai/generateParody"
import { generateImage } from './openai/generateImage';
import { config } from 'dotenv';
config();

async function main() {
  // Seed the generator with today's top stories from the NYT API
  const url = 'https://api.nytimes.com/svc/topstories/v2/home.json?api-key=' + process.env.NYT_API_KEY;
  const response = await fetch(url);
  const data: any = await response.json();

  // Process each article
  const newArticles = [];
  for (const article of data.results) {
    // Generate parody content
    let newArticle
    let imageDescription
    try {
      const articleData = await generateParody(article.title + ' - ' + article.abstract);
      newArticle = {
        title: articleData.title,
        abstract: articleData.abstract,
        content: articleData.content,
        imageUrl: ''
      }
      imageDescription = articleData.imageDescription
    } catch (error) {
      console.log('Error on article generation', error)
      continue // Skip this article and move on to the next one
    }

    // Generate image and move it to permanent storage on S3
    try {
      console.log(`Generate image #${(newArticles.length + 1)}`)
      const generatedImageUrl = await generateImage(imageDescription);
      newArticle.imageUrl = await transferImageToS3(generatedImageUrl, 'notyetnews-' + Date.now() + '.png')
    } catch (error) {
      console.log('Image generation error')
    }

    // Add to parody articles
    if (newArticle) {
      newArticles.push(newArticle)
    }
  }

  // Convert parody articles to a JSON string
  const json = JSON.stringify(newArticles);

  // Upload JSON to S3 with a date-stamped filename
  const date = new Date();
  const year = date.getUTCFullYear();
  const month = ('0' + (date.getUTCMonth() + 1)).slice(-2); // Months are 0-based, so we add 1
  const day = ('0' + date.getUTCDate()).slice(-2); // Add leading 0 if needed
  const filename = `notyetnews-${year}-${month}-${day}.json`;
  console.log('uploading json...')
  const responseS3 = await uploadJSONToS3(json, filename);
  console.log('Uploaded to S3: ' + responseS3);
}

main().catch(error => {
  console.error("Error running the script:", error);
});
We can run this script from the command line with something like this (note: you’ll need ts-node installed to run TypeScript directly):
ts-node your-script.ts
You’ll want to put this in your package.json as a script.
{
  "name": "notyet.news",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint",
    "cron": "ts-node cron/cron.ts"
  },
  ...
Once you are able to use this script locally to generate content, the next step is to set up a cron job to execute it on a regular interval.
There are many ways to do this, but I like using Render’s Cron Jobs because it can connect to a GitHub repo and, for a minimal price, run a cron script at any interval you like. Other providers I looked at either don’t allow long-running processes (quality AI generation takes time!) or require an overly complicated VM setup for this scenario. Let’s keep it simple!
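If you go the Render route, the cron job can be defined as a Blueprint (a render.yaml in the repo root). This is only a sketch: the service name and schedule are examples, and the exact Blueprint keys may differ, so check Render’s docs for the current syntax.
services:
  - type: cron
    name: notyetnews-cron        # example service name
    runtime: node
    schedule: "0 6 * * *"        # once a day at 6:00 UTC
    buildCommand: npm install
    startCommand: npm run cron   # runs the ts-node script from package.json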
Shipping the Website
The hard part is done. We have content in the form of JSON and images uploaded to S3.
Now, we can make a fairly vanilla Next.js page that loads the JSON file from S3 and then renders the content.
import Main from './components/main'

async function getNewsArticles() {
  let date = new Date();
  let attempts = 0;
  const maxAttempts = 10;
  let res;
  while (attempts < maxAttempts) {
    const dateString = date.toISOString().split('T')[0];
    res = await fetch(`https://notyetnews.s3.us-east-1.amazonaws.com/notyetnews-${dateString}.json`, { next: { revalidate: 60 } });
    if (res.ok) {
      // Return both the articles and the date
      return { articles: await res.json(), date: dateString };
    }
    // If not successful, set date to the previous day and increment the attempts count
    date.setDate(date.getDate() - 1);
    attempts++;
  }
  throw new Error('Failed to fetch data after 10 attempts');
}

export default async function Home() {
  const { articles, date } = await getNewsArticles();
  return (
    <Main theme="news" slug={date} articles={articles} />
  );
}
The Main component then lays out the articles in a newspaper-style grid:
import { Fragment } from 'react'
import Image from 'next/image'
import Link from 'next/link'
import Header from './header'
import Footer from './footer'
import { NewsArticle } from '../types'

export default function Main({ theme, slug, articles }: { theme: string, slug: string, articles: NewsArticle[] }) {
  // Drop any articles missing a title or abstract and remember each article's original index
  const articleData = articles.filter((article) => (article.title && article.abstract)).map((a, i) => {
    return { ...a, index: i }
  })
  const topArticles = articleData.slice(0, 12)
  const otherArticles = articleData.slice(12)

  // Group the remaining articles into batches of three for the sidebar layout
  const batchedArticles = []
  for (let i = 0; i < otherArticles.length; i += 3) {
    batchedArticles.push(otherArticles.slice(i, i + 3));
  }

  return (
    <main className="flex min-h-screen flex-col items-center justify-between p-8 xl:p-16 max-w-[1280px] mx-auto">
      <Header />
      <div className="lg:grid lg:grid-cols-5 gap-8">
        <div className="col-span-3 divide-y divide-[#aaa] -mt-8">
          {topArticles.map((article, i) => (
            <Link href={`/${theme}/${slug}/${article.index + 1}`} className="flex flex-col-reverse md:flex-row gap-4 md:gap-8 py-8" key={`article-${i}`}>
              <div className="w-full md:w-2/5">
                <h3 className="text-xl font-semibold pb-2 ">{article.title}</h3>
                <p>{article.abstract}</p>
                <p className="text-indigo-600 italic py-2 block">Read more...</p>
              </div>
              <div className="w-full md:w-3/5 grow md:pl-4">
                <Image className='w-full h-auto' alt="" src={article.imageUrl || `/placeholder${Math.round(Math.random() * 5)}.png`} width={180} height={180} />
              </div>
            </Link>
          ))}
        </div>
        <div className="col-span-2 lg:pl-8 lg:border-l lg:border-l-[#aaa]">
          {batchedArticles.map((batch, i) => (
            <Fragment key={`batch-${i}`}>
              <Link href={`/${theme}/${slug}/${batch[0].index + 1}`} className="flex flex-col gap-8 pb-8">
                <Image className='w-full h-auto' alt="" src={batch[0].imageUrl || `/placeholder${Math.round(Math.random() * 6)}.png`} width={180} height={180} />
                <div className="pb-4">
                  <h3 className="text-xl font-semibold pb-2 ">{batch[0].title}</h3>
                  <p>{batch[0].abstract}</p>
                  <p className="text-indigo-600 italic py-2 block">Read more...</p>
                </div>
              </Link>
              {batch.length === 3 && (
                <div className="grid grid-cols-2 divide-x divide-[#aaa] pb-8">
                  <Link href={`/${theme}/${slug}/${batch[1].index + 1}`} className="pr-8">
                    <Image className='pb-2 w-full h-auto' alt="" src={batch[1].imageUrl || `/placeholder${Math.round(Math.random() * 6)}.png`} width={180} height={180} />
                    <h3 className="text-sm font-semibold pb-2 ">{batch[1].title}</h3>
                    <p className="text-indigo-600 italic py-2 block text-sm">Read more...</p>
                  </Link>
                  <Link href={`/${theme}/${slug}/${batch[2].index + 1}`} className="pl-8">
                    <Image className='pb-2 w-full h-auto' alt="" src={batch[2].imageUrl || `/placeholder${Math.round(Math.random() * 6)}.png`} width={180} height={180} />
                    <h3 className="text-sm font-semibold pb-2 ">{batch[2].title}</h3>
                    <p className="text-indigo-600 italic py-2 block text-sm">Read more...</p>
                  </Link>
                </div>
              )}
            </Fragment>
          ))}
        </div>
      </div>
      <div className="w-full border-t border-gray-400 mt-12 pt-8">
        <Footer />
      </div>
    </main>
  )
}
Check out the full project at https://github.com/johnpolacek/notyetnews and the live site at notyet.news.