
Performance & Optimization

This page explains the recommended production setup for RoboExchanger when you want better speed, better stability, and fewer support problems.

This is the best practical stack for serious usage:

  • PHP 8.3 or higher
  • production mode enabled
  • Redis for cache, session, and queue
  • Supervisor for queue workers
  • S3-compatible object storage for uploads
  • cron only for scheduler tasks
  • Laravel optimize cache during deployment

Best production setup

For a standard high-performance production setup for RoboExchanger, use the following:

  • PHP: 8.3+ with OPcache enabled
  • Web server: Nginx + PHP-FPM, or a properly tuned Apache setup
  • Database: MySQL or MariaDB on SSD storage
  • Cache: Redis
  • Session: Redis
  • Queue: Redis
  • Queue worker: Supervisor
  • File storage: the s3 disk
  • Object storage provider: AWS S3, MinIO, or RustFS
  • Scheduler: cron running php artisan schedule:run

For a serious live server, these are the recommended application values:

APP_ENV=production
APP_DEBUG=false

REDIS_CLIENT=phpredis

CACHE_STORE=redis
SESSION_DRIVER=redis
SESSION_CONNECTION=default
QUEUE_CONNECTION=redis

FILESYSTEM_DISK=s3

The script still works with local storage or the file and database drivers, but those are not the best-performing options.
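
Once these values are set, you can confirm which drivers are actually active. On recent Laravel versions the built-in about command reports the environment, cache, session, and queue drivers:

```shell
# Show the application's environment and active drivers
php artisan about

# Limit the output to the drivers section only
php artisan about --only=drivers
```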

Redis for cache, session, and queue

Redis is the recommended production option for all three:

  • cache
  • session
  • queue

This project already supports Redis in:

  • config/cache.php
  • config/session.php
  • config/queue.php
  • config/database.php

Recommended Redis settings:
REDIS_CLIENT=phpredis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
REDIS_DB=0
REDIS_CACHE_DB=1

CACHE_STORE=redis
SESSION_DRIVER=redis
SESSION_CONNECTION=default
QUEUE_CONNECTION=redis
REDIS_QUEUE_CONNECTION=default
REDIS_QUEUE=high,default,low
REDIS_QUEUE_RETRY_AFTER=90
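
Before switching the drivers over, confirm that the phpredis extension is loaded and that the Redis server answers. Host and port here match the example values above:

```shell
# Confirm the phpredis extension is installed for PHP
php -m | grep -i redis

# Confirm the Redis server is reachable (a healthy server replies: PONG)
redis-cli -h 127.0.0.1 -p 6379 ping
```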

Why Redis is better here

  • faster cache reads and writes
  • better session performance than file or database sessions
  • better queue performance than database queue
  • lower database load during traffic spikes

Supervisor for queue workers

If your server has terminal access and you can install Supervisor, use Supervisor for RoboExchanger queue workers.

This is the recommended production method.

The scheduler-based queue worker has been removed from:

  • routes/console.php

That means:

  • cron still runs scheduler tasks
  • Supervisor should handle queue:work
  • do not put queue:work --stop-when-empty inside the scheduler again if Supervisor is active

Why Supervisor is better

  • queue workers stay alive
  • failed worker processes restart automatically
  • job processing is faster than minute-by-minute queue runs
  • background jobs are handled more consistently

How to install Supervisor

These commands are standard for Ubuntu or Debian servers:

sudo apt update
sudo apt install supervisor -y
sudo systemctl enable supervisor
sudo systemctl start supervisor
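
You can then confirm the service is running and note the installed version:

```shell
# Check that supervisord is active
sudo systemctl status supervisor --no-pager

# Print the installed Supervisor version
supervisord --version
```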

If you use another Linux distribution, install the equivalent supervisor package for that system.

Supervisor configuration file

Create this file:

/etc/supervisor/conf.d/roboexchanger-worker.conf

Example config:

[program:roboexchanger-worker]
process_name=%(program_name)s_%(process_num)02d
command=/usr/bin/php /var/www/roboexchanger/artisan queue:work redis --queue=high,default,low --sleep=1 --tries=3 --timeout=60 --max-time=3600
directory=/var/www/roboexchanger
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/roboexchanger/storage/logs/worker.log
stopwaitsecs=3600

Change these values before use

  • command: use the real PHP binary path and the real project path
  • directory: use your real project root
  • user: use your real web server user, such as www-data, nginx, or another deploy user
  • numprocs: match your server size
  • stdout_logfile: use a log path your server can write to

Good numprocs starting points

  • 2 CPU cores: start with 2
  • 4 CPU cores: start with 4
  • 8 CPU cores: start with 4 to 8, depending on job load

Start and check Supervisor

After saving the config:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start "roboexchanger-worker:*"
sudo supervisorctl status

Useful later commands:

sudo supervisorctl restart "roboexchanger-worker:*"
sudo supervisorctl stop "roboexchanger-worker:*"
sudo supervisorctl status

Cron is still required

Even when you use Supervisor, cron is still required for scheduler tasks.

Use this cron job:

* * * * * php /full-path-to-your-project/artisan schedule:run >> /dev/null 2>&1
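
One way to install that entry, assuming the www-data user runs the app and the example project path from the Supervisor config above, is:

```shell
# Open the crontab for the web user and paste the schedule:run entry
sudo crontab -u www-data -e

# Or append it non-interactively (adjust the user and project path first)
( sudo crontab -u www-data -l 2>/dev/null; \
  echo '* * * * * php /var/www/roboexchanger/artisan schedule:run >> /dev/null 2>&1' ) \
  | sudo crontab -u www-data -
```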

The scheduler is still needed for tasks such as:

  • timeout exchange updates
  • sitemap generation
  • reserve notification email
  • log cleanup
  • license refresh
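
To see exactly which tasks the scheduler will run and when, Laravel ships a listing command:

```shell
# List every scheduled task with its cron expression and next run time
php artisan schedule:list
```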

S3-compatible object storage

For live production, s3 storage is recommended over local disk.

The project already supports the s3 filesystem driver in:

  • config/filesystems.php

Best storage choices

  • AWS S3: best managed commercial option
  • MinIO: good free self-hosted option
  • RustFS: good S3-compatible option and already compatible with Laravel S3 driver

Why external object storage is better

  • easier file scaling
  • better for multiple app servers
  • less local disk pressure from uploads
  • easier backup and CDN integration

Very important note for MinIO and RustFS

If you use MinIO or RustFS for RoboExchanger uploads, the bucket should be public for normal frontend file access.

That is the simplest and most compatible setup for:

  • KYC images
  • blog images
  • logos
  • partner images
  • uploaded proofs
  • other public-facing media

If the bucket is private but the app is expecting normal public URLs, media may not load correctly on the website.

AWS S3 example

FILESYSTEM_DISK=s3

AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
AWS_DEFAULT_REGION=ap-southeast-1
AWS_BUCKET=roboexchanger
AWS_USE_PATH_STYLE_ENDPOINT=false
AWS_ENDPOINT=
AWS_URL=

MinIO or RustFS example

FILESYSTEM_DISK=s3

AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=roboexchanger
AWS_ENDPOINT=https://storage.yourdomain.com
AWS_URL=https://storage.yourdomain.com/roboexchanger
AWS_USE_PATH_STYLE_ENDPOINT=true
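
After saving these values, a quick smoke test of the s3 disk can be run from the project root; the file name here is just an example:

```shell
# Write a small test file to the configured s3 disk, then read it back
php artisan tinker --execute="
    Storage::disk('s3')->put('healthcheck.txt', 'ok');
    echo Storage::disk('s3')->get('healthcheck.txt');
"
```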

Important setup notes for MinIO or RustFS

  • create the bucket first
  • make the bucket public
  • use the correct endpoint
  • use AWS_USE_PATH_STYLE_ENDPOINT=true when your provider needs path-style URLs
  • test real uploads after saving the storage settings
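
With the MinIO client (mc), the first two steps can look like this; the alias name is arbitrary, and the endpoint and credentials are the placeholders from the example above:

```shell
# Register the server, create the bucket, and allow anonymous downloads
mc alias set robostore https://storage.yourdomain.com your-access-key your-secret-key
mc mb robostore/roboexchanger
mc anonymous set download robostore/roboexchanger
```

On older mc releases, the equivalent of the last command is mc policy set download.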

Laravel optimize commands

For production deployment, use optimized autoloading and Laravel cache builds.

Composer command

composer install --no-dev --optimize-autoloader

Laravel optimize command

php artisan optimize

If you need to clear compiled caches after a configuration change:

php artisan optimize:clear
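
Putting these together, a minimal deploy sequence (paths assume the example project root used earlier) is:

```shell
# Minimal deploy sequence for a live server
cd /var/www/roboexchanger
composer install --no-dev --optimize-autoloader
php artisan migrate --force     # run pending migrations without a confirmation prompt
php artisan optimize            # rebuild the config, route, and view caches
php artisan queue:restart       # signal Supervisor-managed workers to reload the new code
```

The queue:restart step matters: long-running workers keep the old code in memory until they are restarted, and Supervisor brings them back up automatically.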

Route, config, and view caching have been verified to complete successfully in this codebase.

PHP and web server recommendations

PHP

  • use PHP 8.3+
  • enable OPcache
  • keep APP_DEBUG=false
  • use production php.ini values
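
As a starting point, a production OPcache block in php.ini might look like this; the numbers are illustrative and should be sized to your codebase:

```ini
; Illustrative production OPcache settings
opcache.enable=1
opcache.memory_consumption=192
opcache.max_accelerated_files=20000
opcache.validate_timestamps=0   ; requires a PHP-FPM reload after each deploy
```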

Web server

  • use Nginx + PHP-FPM if possible
  • point the domain to the public directory only
  • serve over HTTPS
  • enable gzip or Brotli if your stack supports it

Database recommendations

  • use SSD storage
  • do not overload the same small server with database, Redis, mail, and heavy web traffic if your order volume grows
  • keep regular backups
  • monitor slow queries if the website becomes large

Best production checklist

Before going live, the best practical checklist is:

  1. set APP_ENV=production
  2. set APP_DEBUG=false
  3. use Redis for cache, session, and queue
  4. use Supervisor for queue workers
  5. keep cron for scheduler only
  6. use s3 storage
  7. if using MinIO or RustFS, create a public bucket
  8. run composer install --no-dev --optimize-autoloader
  9. run php artisan optimize
  10. run one full real exchange test
  11. test SMTP, Telegram, and SMS if used
  12. confirm uploaded media loads correctly

Official references

For deeper technical reading, see the official Laravel documentation on deployment, queues, caching, task scheduling, and filesystem storage.