Upload
This plugin allows you to upload files to an Amazon S3 bucket.
Installation
npm i @adminforth/upload --save
S3
- Go to https://aws.amazon.com and log in.
- Go to Services -> S3 and create a bucket. Put in a bucket name, e.g. my-reality-bucket. Leave all settings unchanged (ACL Disabled, Block all public access - checked).
- Go to the bucket settings, Permissions, scroll down to Cross-origin resource sharing (CORS) and put in the following configuration:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT"
        ],
        "AllowedOrigins": [
            "http://localhost:3500"
        ],
        "ExposeHeaders": []
    }
]
☝️ In AllowedOrigins add all your domains. For example, if you will serve the admin on https://example.com/admin, you should add "https://example.com" to AllowedOrigins:

[
    "https://example.com",
    "http://localhost:3500"
]

Every character matters, so don't forget to add http:// or https://!
- Go to Services -> IAM and create a new user. Put in a user name, e.g. my-reality-user.
- Attach existing policies directly -> AmazonS3FullAccess. Go to your user -> Add permissions -> Attach policies directly -> AmazonS3FullAccess.
- Go to Security credentials and create a new access key. Save the Access key ID and Secret access key.
- Add the credentials to your .env file:
...
NODE_ENV=development
AWS_ACCESS_KEY_ID=your_access_key_id
AWS_SECRET_ACCESS_KEY=your_secret_access_key
Now add a column for storing the path to the uploaded file in the database by adding the apartment_image field to ./schema.prisma:
model apartments {
  id                String     @id
  created_at        DateTime?
  title             String
  square_meter      Float?
  price             Decimal
  number_of_rooms   Int?
  description       String?
  country           String?
  listed            Boolean
  realtor_id        String?
  apartment_image   String?
}
Migrate the Prisma schema:
npx prisma migrate dev --name add-apartment-image
Add the column to the aparts resource configuration:
import UploadPlugin from '@adminforth/upload';
import { v4 as uuid } from 'uuid';

export const admin = new AdminForth({
  ...
  resourceId: 'aparts',
  columns: [
    ...
    {
      name: 'apartment_image',
      showIn: [], // You can set to ['list', 'show'] if you wish to show the path column in the list and show views
    }
    ...
  ],
  plugins: [
    ...
    new UploadPlugin({
      pathColumnName: 'apartment_image',
      s3Bucket: 'my-bucket', // ❗ Your bucket name
      s3Region: 'us-east-1', // ❗ Selected region
      s3AccessKeyId: process.env.AWS_ACCESS_KEY_ID,
      s3SecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
      allowedFileExtensions: ['jpg', 'jpeg', 'png', 'gif', 'webm', 'webp'],
      maxFileSize: 1024 * 1024 * 20, // 20 MB
      s3Path: ({originalFilename, originalExtension, contentType}) =>
        `aparts/${new Date().getFullYear()}/${uuid()}-${originalFilename}.${originalExtension}`,
    })
  ]
  ...
});
Here you can see how the plugin works:
This setup will upload files to the S3 bucket with a private ACL and save the path to the file (relative to the bucket root) in the apartment_image column.
Once you go to the show or list view of the aparts resource, you will see a preview of the uploaded file via presigned temporary URLs generated by the plugin.
☝ When the upload feature is used on a record that already exists in the database (from the edit page), the s3Path callback will receive an additional record parameter with all values of the record. Generally we don't recommend denormalizing any state of the record into the S3 path (instead, store links to a unique path on S3 in the record field, as in the example above). But if you are 100% sure this kind of state will be static, you might link to it: s3Path: ({originalExtension, record}) => `game_images/${record.static_game_code}.${originalExtension}`. Please note that when the upload is done from the create view, record will be undefined.
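For example, here is a minimal sketch of an s3Path callback that only relies on record when it is present; the static_game_code field is taken from the example above, and the fallback prefix is purely illustrative:

s3Path: ({originalFilename, originalExtension, record}) =>
  record?.static_game_code
    ? `game_images/${record.static_game_code}.${originalExtension}`
    : `game_images/unassigned/${uuid()}-${originalFilename}.${originalExtension}`,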
If you want to draw such images in your main non-admin app, e.g. Nuxt, you should generate presigned URLs yourself. Here is a Node.js example of how to do it:
import AWS from 'aws-sdk';

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: 'us-east-1',
});

export async function getPresignedUrl(s3Path: string): Promise<string> {
  return s3.getSignedUrlPromise('getObject', {
    Bucket: 'my-bucket',
    Key: s3Path,
    Expires: 60 * 60, // 1 hour
  });
}
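If your project uses AWS SDK v3 instead of the legacy aws-sdk v2 package shown above, a roughly equivalent sketch (assuming the @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner packages) could look like this:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Client = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID as string,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY as string,
  },
});

export async function getPresignedUrl(s3Path: string): Promise<string> {
  // Generate a temporary GET URL for the object stored at s3Path
  return getSignedUrl(
    s3Client,
    new GetObjectCommand({ Bucket: 'my-bucket', Key: s3Path }),
    { expiresIn: 60 * 60 }, // 1 hour
  );
}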
Alternatively, if you don't want to generate presigned URLs, you might want to make all objects public. Then you will be able to concatenate the bucket base domain with the path stored in the database and use it as the image source. Let's consider how to do it.
S3 upload with public access
- First of all go to your bucket settings, Permissions, scroll down to Block public access (bucket settings for this bucket) and uncheck all checkboxes.
- Go to bucket settings, Permissions, Object ownership and select "ACLs Enabled" and "Bucket owner preferred" radio buttons.
Then you can change ACL in plugin configuration:
new UploadPlugin({
pathColumnName: 'apartment_image',
s3Bucket: 'my-bucket',
s3Region: 'us-east-1',
s3AccessKeyId: process.env.AWS_ACCESS_KEY_ID,
s3SecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
s3ACL: 'public-read',
allowedFileExtensions: ['jpg', 'jpeg', 'png', 'gif', 'webm'],
maxFileSize: 1024 * 1024 * 20, // 5MB
s3Path: ({originalFilename, originalExtension, contentType}) =>
`aparts/${new Date().getFullYear()}/${uuid()}-${originalFilename}.${originalExtension}`,
})
Now every uploaded file will be public, so in your custom app you can simply concatenate the bucket URL with the s3Path to get a public URL:
export function getPublicUrl(s3Path: string): string {
  return `https://my-bucket.s3.us-east-1.amazonaws.com/${s3Path}`;
}
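For example, assuming apartment is a record fetched in your own app and apartment_image holds the stored S3 path as configured above:

const imageUrl = getPublicUrl(apartment.apartment_image);
// e.g. https://my-bucket.s3.us-east-1.amazonaws.com/aparts/2025/<uuid>-photo.jpg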
For previews in AdminForth the plugin will still use presigned URLs, but you can change this by providing a previewUrl function in the plugin configuration:
preview: {
  previewUrl: ({s3Path}) => `https://my-bucket.s3.us-east-1.amazonaws.com/${s3Path}`,
}
Make sure that you change "my-bucket" and "us-east-1" to your own settings.
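To avoid hardcoding them, you could read the bucket and region from environment variables; the AWS_S3_BUCKET and AWS_S3_REGION names below are just an assumption, use whatever variables your project already defines:

preview: {
  previewUrl: ({s3Path}) =>
    `https://${process.env.AWS_S3_BUCKET}.s3.${process.env.AWS_S3_REGION}.amazonaws.com/${s3Path}`,
}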
Also, you might want to put a CDN in front of your bucket, for example Cloudflare. In this case we recommend routing all AdminForth previews over the CDN as well, for faster warm-up and better performance.
If, for example, your domain is my-domain.com and your bucket is named static.my-domain.com, you should change the preview URL like this:
preview: {
  showInList: true,
  previewUrl: ({s3Path}) => `https://static.my-domain.com/${s3Path}`,
}
Also you will have to enable static website hosting in your bucket settings and set index.html and error.html to empty strings.
Image generation
The Upload plugin supports AI generation for images:
new UploadPlugin({
  ...
  generation: {
    provider: 'openai-dall-e',
    countToGenerate: 2, // how many images to generate in one shot
    openAiOptions: {
      model: 'dall-e-3', // one of the models from the OpenAI docs https://platform.openai.com/docs/api-reference/images/create
      size: '1792x1024', // make sure the size is supported by the model, see the OpenAI docs
      apiKey: process.env.OPENAI_API_KEY as string,
    },
    fieldsForContext: ['title'],
  },
  ...
})
Here is how it works:
Rate limits
You can set rate limits for image generation per IP address:
new UploadPlugin({
  ...
  generation: {
    ...
    rateLimit: {
      limit: '5/12h', // up to 5 times per 12 hours
      errorMessage: 'You exhausted your image generation limit of 5 generations per 12 hours, please try again later',
    },
    ...
  },
  ...
});
Max-width for preview image
You can set the maximum width for the preview image in the ./resources/apartments.ts file by adding the maxWidth property to the preview configuration:
...
new UploadPlugin({
  pathColumnName: 'apartment_image',
  s3Bucket: 'my-bucket', // ❗ Your bucket name
  s3Region: 'us-east-1', // ❗ Selected region
  s3AccessKeyId: process.env.AWS_ACCESS_KEY_ID,
  s3SecretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  allowedFileExtensions: ['jpg', 'jpeg', 'png', 'gif', 'webm', 'webp'],
  maxFileSize: 1024 * 1024 * 20, // 20 MB
  s3Path: ({originalFilename, originalExtension, contentType}) =>
    `aparts/${new Date().getFullYear()}/${uuid()}-${originalFilename}.${originalExtension}`,
  preview: {
    showInList: true,
    maxWidth: '200px', // Set the maximum width for the preview image
    ...
  }
})
...
});