System

Change Log

Audit trail of every save, delete, import, and workflow update across the backend.

Total Changes: 1,029
Today: 49
Latest: 2026-05-09 20:12

1,029 results · Page 21 of 21
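Each entry below pairs a "Before" snapshot with a "Changed Data" snapshot; the actual change is whichever fields differ between the two (for the repeated `refresh_code` entries, only `updated_at` moves). A minimal sketch of reading an entry that way, assuming the blobs are plain JSON objects and `[]` means "no prior row" (the `changed_fields` helper is hypothetical, not part of the backend):

```python
import json

def changed_fields(before_json: str, after_json: str) -> dict:
    """Return {field: (old, new)} for fields that differ between snapshots."""
    # An empty "Before" is serialized as "[]" for create/upload actions.
    before = {} if before_json.strip() in ("", "[]") else json.loads(before_json)
    after = json.loads(after_json)
    return {
        key: (before.get(key), value)
        for key, value in after.items()
        if before.get(key) != value
    }

# Trimmed excerpt from the first refresh_code entry below:
before = '{"code_status":"EXPIRED","updated_at":"2026-04-07 12:09:07"}'
after = '{"code_status":"EXPIRED","updated_at":"2026-04-07 12:09:12"}'
print(changed_fields(before, after))
# → {'updated_at': ('2026-04-07 12:09:07', '2026-04-07 12:09:12')}
```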

square_terminal · refresh_code
2026-04-07 12:09:12 · anonymous
backend_square_terminal_code #5
Context
[]
Before
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:09:07"}
Changed Data
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:09:12"}
square_terminal · refresh_code
2026-04-07 12:09:07 · anonymous
backend_square_terminal_code #5
Context
[]
Before
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:09:01"}
Changed Data
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:09:07"}
square_terminal · refresh_code
2026-04-07 12:09:01 · anonymous
backend_square_terminal_code #5
Context
[]
Before
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:08:58"}
Changed Data
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:09:01"}
square_terminal · refresh_code
2026-04-07 12:08:58 · anonymous
backend_square_terminal_code #5
Context
[]
Before
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:08:54"}
Changed Data
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:08:58"}
square_terminal · refresh_code
2026-04-07 12:08:54 · anonymous
backend_square_terminal_code #5
Context
[]
Before
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:08:37"}
Changed Data
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:08:54"}
square_terminal · refresh_code
2026-04-07 12:08:37 · anonymous
backend_square_terminal_code #5
Context
[]
Before
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"UNPAIRED","location_id":"5W9V4C95PWDVQ","pair_by":"2026-04-07 16:05:24","paired_at":"2026-04-07 16:00:24","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"pair_by\":\"2026-04-07T16:05:24.000Z\",\"created_at\":\"2026-04-07T16:00:24.731Z\",\"status\":\"UNPAIRED\",\"status_changed_at\":\"2026-04-07T16:00:24.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:00:24"}
Changed Data
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"EXPIRED","location_id":"5W9V4C95PWDVQ","pair_by":null,"paired_at":"1970-01-01 00:00:00","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"created_at\":\"2026-04-07T16:00:25.000Z\",\"status\":\"EXPIRED\",\"status_changed_at\":\"1970-01-01T00:00:00.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:08:37"}
square_terminal · create_code
2026-04-07 12:00:24 · anonymous
backend_square_terminal_code #5
Context
[]
Before
[]
Changed Data
{"backend_square_terminal_code_id":"1","square_device_code_id":"5JVYM58J39RMV","code_value":"NJQDJK","device_name":"Front Desk Terminal","product_type":"TERMINAL_API","code_status":"UNPAIRED","location_id":"5W9V4C95PWDVQ","pair_by":"2026-04-07 16:05:24","paired_at":"2026-04-07 16:00:24","raw_json":"{\"id\":\"5JVYM58J39RMV\",\"name\":\"Front Desk Terminal\",\"code\":\"NJQDJK\",\"product_type\":\"TERMINAL_API\",\"location_id\":\"5W9V4C95PWDVQ\",\"pair_by\":\"2026-04-07T16:05:24.000Z\",\"created_at\":\"2026-04-07T16:00:24.731Z\",\"status\":\"UNPAIRED\",\"status_changed_at\":\"2026-04-07T16:00:24.000Z\"}","created_at":"2026-04-07 12:00:24","updated_at":"2026-04-07 12:00:24"}
documents · upload
2026-04-07 11:53:37 · anonymous
backend_document #9
Context
{"file_name":"README-libreoffice-conversion.md","mime_type":"application/octet-stream"}
Before
[]
Changed Data
{"backend_document_id":"9","document_type":"upload","title":"Copilot - document changes","slug":"copilot-document-changes","summary_text":"Copilot - document changes to this module","content_markdown":null,"content_html":null,"file_name":"README-libreoffice-conversion.md","stored_name":"20260407-155337-5741b116.md","mime_type":"application/octet-stream","file_size_bytes":"10541","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260407-155337-5741b116.md","is_deleted":"0","created_at":"2026-04-07 11:53:37","updated_at":"2026-04-07 11:53:37","editor_content":"# LibreOffice Conversion & Editor Integration\r\n\r\nThis document describes the features added to support server-side LibreOffice conversions, the asynchronous conversion queue and worker, frontend changes, and deployment/run instructions.\r\n\r\nIMPORTANT: This document assumes the project root is the repo root (e.g. `C:\\code\\customerdb`) and paths below are relative to it.\r\n\r\n---\r\n\r\n## Overview\r\n\r\nSummary of what was added or changed:\r\n\r\n- Frontend/editor\r\n  - `backend/documents.php`\r\n    - Replaced simple markdown renderer with `marked.js` and added Prism highlighting.\r\n    - Toolbar: Save button, view toggles, fullscreen, color highlight swatches that wrap selected text in a `<span class=\"hl\" style=\"background:...\">`.\r\n    - Files that can be edited inline show an **Edit** button and load into the editor; non-inline files show **Open** (open stored file in new tab).\r\n    - When a file is non-inline and conversion is pending, the preview shows a conversion status block and polls the API until `content_html` becomes available.\r\n\r\n- Document extraction & conversion helpers\r\n  - `backend/document_service.php`\r\n    - Added `app_document_can_edit_inline()` to detect files editable directly in the textarea.\r\n    - Implemented lightweight extraction for `docx`, `odt`, `rtf`, `pptx`, `xlsx` where possible (Zip-based parsing for Office formats).\r\n    - 
Added LibreOffice helpers: `app_libreoffice_find_command()`, `app_libreoffice_convert_to_html()`, `app_libreoffice_convert_to_text()`.\r\n    - Added queuing hook `app_conversion_enqueue()` to create file-based jobs and insert DB job rows.\r\n\r\n- Asynchronous conversion queue and worker\r\n  - Job queue storage: `backend/documents_storage/convert_queue/job-*.json`.\r\n  - DB tracking: table `backend_conversion_job` to track job lifecycle (`queued`, `processing`, `done`, `error`, `cancelled`).\r\n  - Worker script: `backend/bin/convert_worker.php` ? processes job files, runs LibreOffice conversions, saves results to `backend_document.content_html` and updates job status in DB.\r\n  - Example systemd unit: `backend/systemd/convert-worker.service` (adjust paths/user before enabling).\r\n\r\n- API to manage jobs\r\n  - `backend/api/conversion_jobs.php` ? list jobs and `POST` actions: `retry`, `cancel`.\r\n\r\n- Migration and helper tooling\r\n  - Migration SQL: `migrations/20260407_add_conversion_job.sql` ? creates job table and ensures `content_html` on `backend_document`.\r\n  - Migration runner: `backend/bin/run_migration.php` ? executes the migration SQL using the app DB connection.\r\n\r\n---\r\n\r\n## File map (key files)\r\n\r\n- `backend/documents.php` ? Editor UI, preview handling, polling, highlight UI.\r\n- `backend/document_service.php` ? Extraction/queue helpers and LibreOffice integration.\r\n- `backend/api/documents.php` ? Document detail API used by frontend.\r\n- `backend/api/conversion_jobs.php` ? New API to list/manage conversion jobs.\r\n- `backend/bin/convert_worker.php` ? Worker that runs LibreOffice and updates DB.\r\n- `backend/bin/run_migration.php` ? Helper to run SQL migration with application DB config.\r\n- `migrations/20260407_add_conversion_job.sql` ? SQL migration file to apply.\r\n- `backend/systemd/convert-worker.service` ? 
example systemd unit to run worker as daemon.\r\n\r\n---\r\n\r\n## Database changes\r\n\r\nThe migration will:\r\n\r\n1. Create table `backend_conversion_job` with columns:\r\n   - `conversion_job_id` (PK)\r\n   - `backend_document_id` (FK-ish)\r\n   - `status` (VARCHAR) values: `queued`, `processing`, `done`, `error`, `cancelled`\r\n   - `storage_path`, `output_path`, `error_text`, `created_at`, `updated_at`\r\n\r\n2. Ensure `backend_document.content_html` LONGTEXT column exists (used for serving converted HTML previews).\r\n\r\nMigration file: `migrations/20260407_add_conversion_job.sql`\r\n\r\n### Apply migration\r\n\r\nTwo options:\r\n\r\n- Run via the application DB connection (uses `app_db()` settings in `bootstrap.php`):\r\n\r\n  - From repository root:\r\n\r\n    ```bash\r\n    php backend/bin/run_migration.php\r\n    ```\r\n\r\n  - This script reads `migrations/20260407_add_conversion_job.sql` and executes the statements.\r\n\r\n- Run directly with MariaDB client (replace placeholders):\r\n\r\n  ```bash\r\n  mysql -u <db_user> -p ellas_alterations < migrations/20260407_add_conversion_job.sql\r\n  ```\r\n\r\nNote: The app expects the `ellas_alterations` database (the DB name you mentioned). Ensure the DB user has `CREATE/ALTER` privileges.\r\n\r\n---\r\n\r\n## Server prerequisites\r\n\r\n- LibreOffice must be installed and available on PATH as `libreoffice` or `soffice`.\r\n  - Debian/Ubuntu example: `sudo apt update && sudo apt install -y libreoffice`\r\n- PHP must have `exec()` available for the worker process (worker uses `exec()` to invoke LibreOffice).\r\n- PHP `ZipArchive` extension (`php-zip`) is recommended for better extraction of docx/odt/pptx/xlsx.\r\n- The web/worker user must have read access to the stored files and write access to:\r\n  - `backend/documents_storage/convert_queue` (job files)\r\n- Prevent running conversions as `root` in production ? 
configure a dedicated unprivileged user.\r\n\r\nSecurity note: Converting user-uploaded files can be risky. Run the worker as a dedicated, restricted user, and consider sandboxing or containerization.\r\n\r\n---\r\n\r\n## Worker (how it runs)\r\n\r\n- Manual run (process queued jobs once):\r\n\r\n  ```bash\r\n  php backend/bin/convert_worker.php\r\n  ```\r\n\r\n- Run continuously for development/testing:\r\n\r\n  ```bash\r\n  while true; do php backend/bin/convert_worker.php; sleep 3; done\r\n  ```\r\n\r\n- Systemd service (example):\r\n  - Copy `backend/systemd/convert-worker.service` to `/etc/systemd/system/convert-worker.service` and update `WorkingDirectory` and `ExecStart` to match your deployment path.\r\n  - Enable and start:\r\n\r\n    ```bash\r\n    sudo systemctl daemon-reload\r\n    sudo systemctl enable --now convert-worker.service\r\n    sudo systemctl status convert-worker.service\r\n    ```\r\n\r\nThe worker behavior:\r\n- Scans `backend/documents_storage/convert_queue/` for `job-*.json` files.\r\n- Marks DB job `status = processing` and updates the JSON job file with a timestamp.\r\n- Runs LibreOffice to convert the source file to HTML. 
If that fails, it attempts to convert to plain text.\r\n- On success, writes converted HTML (or text wrapped as HTML) into `backend_document.content_html` and updates job row `status = done` and `output_path`.\r\n- On failure, updates job row `status = error` and stores `error_text`.\r\n- Removes the job file when finished.\r\n\r\nNote: The worker updates DB rows using the same DB configuration as the app.\r\n\r\n---\r\n\r\n## API: managing conversion jobs\r\n\r\n- List jobs\r\n  - `GET /backend/api/conversion_jobs.php`\r\n  - Returns recent job rows (limited to 200)\r\n\r\n- Retry a job\r\n  - `POST /backend/api/conversion_jobs.php` with JSON body `{ \"action\": \"retry\", \"conversion_job_id\": <id> }`\r\n  - Re-enqueues a job and marks DB row `status = 'queued'`.\r\n\r\n- Cancel a job\r\n  - `POST /backend/api/conversion_jobs.php` with JSON body `{ \"action\": \"cancel\", \"conversion_job_id\": <id> }`\r\n\r\nThe endpoint returns JSON success or error messages.\r\n\r\n---\r\n\r\n## Frontend behavior & UX\r\n\r\n- When selecting a document in the backend Documents UI:\r\n  - If `document.can_edit_inline` is true, the **Edit** button loads the document into the textarea editor.\r\n  - If not editable inline:\r\n    - If `document.content_html` exists, the preview renders the converted HTML (Prism highlighting applied).\r\n    - If `content_html` is missing, the preview shows a **Conversion pending...** panel with a placeholder progress bar and `convertStatus` text ? the page polls the documents API until `content_html` becomes available.\r\n\r\n- Save / highlights\r\n  - Toolbar Save button submits the same form as the Save Document button.\r\n  - Highlight swatches wrap selected text with `<span class=\"hl\" style=\"background:...\">` which renders in the preview. Highlights will be saved inside `content_markdown` (persisted as HTML markup in the markdown content).\r\n\r\n---\r\n\r\n## How to test end-to-end\r\n\r\n1. 
Ensure migration applied and DB reachable.\r\n2. Ensure LibreOffice is installed and the worker can run `exec()`.\r\n3. Upload a `.docx` / `.pptx` / `.odt` file using the Upload form in the backend.\r\n4. Check that a JSON job file appears in `backend/documents_storage/convert_queue/` and a row is created in `backend_conversion_job` with `status = queued`.\r\n5. Run the worker manually: `php backend/bin/convert_worker.php`.\r\n6. Worker should convert and update `backend_document.content_html` and set job `status = done`.\r\n7. In the Documents page, select the document: the preview should update to show converted HTML (or show Conversion pending until conversion completes).\r\n\r\n---\r\n\r\n## Troubleshooting\r\n\r\n- No jobs processed:\r\n  - Ensure worker runs under an account that can read job files and run `libreoffice`.\r\n  - Confirm `libreoffice` is on PATH and `php -r \"echo shell_exec('command -v libreoffice');\"` returns a path.\r\n  - Check permissions of `backend/documents_storage/convert_queue/`.\r\n\r\n- Conversion fails / `status = error`:\r\n  - Inspect `backend_conversion_job.error_text` for details.\r\n  - Try running `php backend/bin/convert_worker.php` manually and capture output.\r\n  - Confirm LibreOffice works on the file from the worker user: `libreoffice --headless --convert-to html --outdir /tmp /path/to/file.docx`.\r\n\r\n- HTML content looks malformed or contains unsafe markup:\r\n  - Sanitize HTML before serving (recommended). 
Consider `HTMLPurifier` or a whitelist approach.\r\n\r\n---\r\n\r\n## Recommended next improvements (future work)\r\n\r\n- Migrate the queue to a DB-backed atomic queue (use `backend_conversion_job` rows as the primary queue source) so workers select and lock rows (avoid file races).\r\n- Sanitize `content_html` before storing/serving.\r\n- Add a small admin page to list conversion jobs and perform retry/cancel from the UI (calls `backend/api/conversion_jobs.php`).\r\n- Add metrics & logging for conversion durations and failure counts.\r\n- Implement a retention policy / cleanup job for old job records and temp files.\r\n\r\n---\r\n\r\n## Contact / follow-up\r\n\r\nTell me which of the following you want next and I will implement it:\r\n\r\n- Convert the file-based queue to a DB-atomic queue (worker picks rows with `status='queued'` using an atomic update), and remove JSON job files.\r\n- Add the admin UI to manage conversion jobs inside the backend.\r\n- Implement HTML sanitization of `content_html` prior to storage.\r\n- Update the systemd unit to match the exact server paths & user if you give me that information.\r\n\r\n\r\n\r\n---\r\n\r\n*Document generated by GitHub Copilot.*\r\n","is_text_editable":1,"can_edit_inline":1}
documents · save
2026-04-07 10:16:12 · anonymous
backend_document #2
Context
{"document_type":"markdown"}
Before
{"backend_document_id":"2","document_type":"markdown","title":"Test Note","slug":"test-note","summary_text":"This is test note summary","content_markdown":"# test\r\n\r\n## second line\r\n\r\n---\r\n\r\n- list\r\n- sls\r\n- sdfas'\r\n- asdasdf","content_html":"<h1>test</h1>\n<h2>second line</h2>\n<p>---</p>\n<ul>\n<li>list</li>\n<li>sls</li>\n<li>sdfas&#039;</li>\n<li>asdasdf</li>\n</ul>","file_name":null,"stored_name":null,"mime_type":null,"file_size_bytes":null,"storage_path":null,"is_deleted":"0","created_at":"2026-04-02 15:00:01","updated_at":"2026-04-03 08:04:41","editor_content":"# test\r\n\r\n## second line\r\n\r\n---\r\n\r\n- list\r\n- sls\r\n- sdfas'\r\n- asdasdf","is_text_editable":1}
Changed Data
{"backend_document_id":"2","document_type":"markdown","title":"Test Note","slug":"test-note","summary_text":"This is test note summary","content_markdown":"# test\r\n\r\n## second line\r\n\r\n---\r\n\r\n- list\r\n- sls\r\n- sdfas'\r\n- asdasdf","content_html":"<h1>test</h1>\n<h2>second line</h2>\n<p>---</p>\n<ul>\n<li>list</li>\n<li>sls</li>\n<li>sdfas&#039;</li>\n<li>asdasdf</li>\n</ul>","file_name":null,"stored_name":null,"mime_type":null,"file_size_bytes":null,"storage_path":null,"is_deleted":"0","created_at":"2026-04-02 15:00:01","updated_at":"2026-04-07 10:16:12","editor_content":"# test\r\n\r\n## second line\r\n\r\n---\r\n\r\n- list\r\n- sls\r\n- sdfas'\r\n- asdasdf","is_text_editable":1}
documents · upload
2026-04-06 15:55:18 · anonymous
backend_document #8
Context
{"file_name":"SESSION_LOG_2026-04-06.md","mime_type":"application/octet-stream"}
Before
[]
Changed Data
{"backend_document_id":"8","document_type":"upload","title":"Codex 04-06-2026","slug":"codex-04-06-2026","summary_text":"Codex 04-06-2026 Completed and Documented changes SESSION_LOG_2026-04-06.md","content_markdown":null,"content_html":null,"file_name":"SESSION_LOG_2026-04-06.md","stored_name":"20260406-195518-95396ba2.md","mime_type":"application/octet-stream","file_size_bytes":"23912","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260406-195518-95396ba2.md","is_deleted":"0","created_at":"2026-04-06 15:55:18","updated_at":"2026-04-06 15:55:18"}
documents · upload
2026-04-06 15:54:18 · anonymous
backend_document #7
Context
{"file_name":"Claude-04-06-2026-updated Documentation.odt","mime_type":"application/vnd.oasis.opendocument.text"}
Before
[]
Changed Data
{"backend_document_id":"7","document_type":"upload","title":"Claude 04-06-2026","slug":"claude-04-06-2026","summary_text":"Claude 04-06-2026 Completed and Updated Documents","content_markdown":null,"content_html":null,"file_name":"Claude-04-06-2026-updated Documentation.odt","stored_name":"20260406-195418-e59c0ad8.odt","mime_type":"application/vnd.oasis.opendocument.text","file_size_bytes":"38724","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260406-195418-e59c0ad8.odt","is_deleted":"0","created_at":"2026-04-06 15:54:18","updated_at":"2026-04-06 15:54:18"}
speed_dial · create
2026-04-06 10:36:14 · anonymous
backend_speed_dial #3
Context
[]
Before
[]
Changed Data
{"backend_speed_dial_id":"3","link_name":"Google Rich Text","link_url":"https://search.google.com/test/rich-results/result/r%2Forganization-info?id=EMIaMZZG3Dxsc6hD-E9UEw","sort_order":"0","is_active":"1","created_at":"2026-04-06 10:36:14","updated_at":"2026-04-06 10:36:14"}
speed_dial · create
2026-04-06 10:35:29 · anonymous
backend_speed_dial #2
Context
[]
Before
[]
Changed Data
{"backend_speed_dial_id":"2","link_name":"PostIz","link_url":"http://192.168.7.206:5009/auth","sort_order":"0","is_active":"1","created_at":"2026-04-06 10:35:29","updated_at":"2026-04-06 10:35:29"}
speed_dial · create
2026-04-06 10:35:01 · anonymous
backend_speed_dial #1
Context
[]
Before
[]
Changed Data
{"backend_speed_dial_id":"1","link_name":"MariaDB Portail","link_url":"http://192.168.7.202:3456/","sort_order":"0","is_active":"1","created_at":"2026-04-06 10:35:01","updated_at":"2026-04-06 10:35:01"}
schedule · create
2026-04-05 16:42:13 · admin@ellasalterations.com
appointment_workflow #0
Context
{"workflow_status":"no_show","visit_id":129010,"order_id":103763}
Before
[]
Changed Data
{"workflow_id":"4","source":"visit","source_key":"129010","appointment_date":"2026-04-05","customer_id":"3462","visit_id":"129010","order_id":null,"workflow_status":"no_show","payment_state":null,"payment_method":null,"payment_amount":"0.00","paid_in_advance":"0","report_required":"0","report_attached":"0","report_checked_at":"2026-04-05 16:42:13","provider_notes_text":null,"operator_notes":null,"no_show_email_key":"none","created_at":"2026-04-05 16:42:13","updated_at":"2026-04-05 16:42:13"}
documents · save
2026-04-05 16:01:53 · anonymous
backend_document #6
Context
{"document_type":"markdown"}
Before
{"backend_document_id":"6","document_type":"upload","title":"Claude 04-05-2026 Changes","slug":"claude-04-05-2026-changes","summary_text":"Claude 04-05-2026 Changes","content_markdown":"ok ssh kefa@192.168.7.202 my MariaDB ellas_alterations i need a way to back this data up multiple times a day what are my options so we always have a fresh backup of data and not lose any data\r\n\r\nI’m checking the current MariaDB backup options so I can give you something solid, not hand waving. Then I’ll turn that into a setup that fits your 192.168.7.202 box and your “multiple backups a day, fresh data, don’t lose work” goal.\r\n\r\nThought for 11s\r\n\r\nYes. You absolutely can set this up so you are not relying on one lonely nightly dump and hoping for the best.\r\n\r\nThe direct answer is this:\r\n\r\nThe safest practical setup for your MariaDB on 192.168.7.202 is a layered backup plan:\r\n\r\nBinary logs turned on all the time\r\nA full physical backup every night with mariadb-backup\r\nIncremental backups several times a day\r\nA separate logical dump once or twice a day\r\nCopies of backups pushed to another drive or another machine\r\nRegular restore tests\r\n\r\nThat gives you fresh backups, fast recovery, and a way to restore to a specific point in time instead of just “whatever the last dump was.” MariaDB’s own docs distinguish logical backups from physical backups, recommend mariadb-backup for production style physical backups, and explain that point in time recovery uses a restored backup plus binary logs. The binary log records database changes, which is exactly what you want if your goal is to avoid losing recent data.\r\n\r\nYour real options\r\nOption 1. 
mariadb-dump\r\n\r\nThis makes SQL dump files.\r\n\r\nGood\r\n\r\nSimple\r\nEasy to restore a whole database or single tables\r\nNice extra safety copy\r\n\r\nBad\r\n\r\nSlower on big databases\r\nRestore takes longer\r\nNot ideal as your only “fresh all day” backup method\r\n\r\nMariaDB documents mariadb-dump as the logical backup tool, and logical backups recreate data using SQL statements like CREATE TABLE and INSERT.\r\n\r\nOption 2. mariadb-backup\r\n\r\nThis is the one I would treat as your main production backup.\r\n\r\nGood\r\n\r\nFaster and more serious than plain SQL dumps\r\nDesigned for full and incremental backups\r\nBetter for larger active databases\r\nSupports restore workflows needed for point in time recovery\r\n\r\nBad\r\n\r\nRestore procedure is a little more involved\r\nYou need to prepare and test restores properly\r\n\r\nMariaDB’s docs say mariadb-backup supports full and incremental backups, and that backups must be prepared before restore.\r\n\r\nOption 3. Binary logs for point in time recovery\r\n\r\nThis is what closes the “we just lost the last few hours” gap.\r\n\r\nGood\r\n\r\nLets you recover right up to a specific time\r\nBest protection against losing recent inserts, edits, deletes\r\nEssential if you want recovery between scheduled backups\r\n\r\nBad\r\n\r\nNot a complete backup by itself\r\nMust be combined with a full or incremental backup\r\nNeeds storage management so logs do not grow forever\r\n\r\nMariaDB states that point in time recovery restores a base backup first, then replays binary logs with mysqlbinlog, and that the binary log contains records of data changes.\r\n\r\nOption 4. 
Replication to another MariaDB server\r\n\r\nThis is more about high availability than backup.\r\n\r\nGood\r\n\r\nExtra live copy\r\nFast failover possibilities\r\n\r\nBad\r\n\r\nIf bad data or accidental deletes replicate, you can lose data on both\r\nStill does not replace backups\r\n\r\nI would only add this later if uptime becomes a bigger deal.\r\n\r\nOption 5. Filesystem snapshots\r\n\r\nThis can be excellent if the server storage is on ZFS, LVM snapshots, or similar.\r\n\r\nGood\r\n\r\nFast\r\nGreat for frequent local rollback points\r\n\r\nBad\r\n\r\nDepends on how your storage is built\r\nStill should not be your only backup\r\nSnapshot alone is not the same as off machine backup\r\nWhat I recommend for your server\r\n\r\nFor ellas_alterations, I would use this exact strategy:\r\n\r\nBest setup\r\n\r\nLayer 1\r\nTurn on binary logging and keep enough logs for several days.\r\n\r\nLayer 2\r\nRun one nightly full backup with mariadb-backup.\r\n\r\nLayer 3\r\nRun incremental backups every 4 hours during the day.\r\n\r\nLayer 4\r\nRun a logical SQL dump twice a day as a second safety net.\r\n\r\nLayer 5\r\nImmediately copy backups to a second location, ideally:\r\n\r\nanother drive like /mnt/drive1 or /mnt/drive2\r\nand even better, another machine on your network\r\n\r\nLayer 6\r\nDo a test restore regularly, even if only to a temp database or test server.\r\n\r\nThat combo is the sweet spot between safe, practical, and not overly crazy.\r\n\r\nWhy this is the best balance\r\n\r\nHere is the logic:\r\n\r\nmariadb-dump alone is easy but too slow and too coarse if you want frequent fresh recovery points\r\nmariadb-backup gives you strong base and incremental backups\r\nbinary logs fill in the gaps between those backups\r\noff server copies protect you if the whole machine dies\r\nrestore tests make sure your backups are not fake confidence\r\n\r\nThat is the difference between “we have backup files” and “we can actually recover.”\r\n\r\nA practical 
schedule\r\n\r\nHere is a schedule I would use for you:\r\n\r\nEvery night at 1:00 AM\r\n\r\nfull mariadb-backup\r\n\r\nEvery 4 hours\r\n\r\nincremental mariadb-backup\r\n\r\nEvery 12 hours\r\n\r\nmariadb-dump of ellas_alterations\r\n\r\nEvery hour\r\n\r\nsync backup folder to another disk or server\r\n\r\nBinary logs\r\n\r\nalways on\r\nkeep at least 7 to 14 days, depending on space\r\n\r\nIf your business data changes constantly and you are nervous about same day loss, make the incrementals every 2 hours instead of every 4.\r\n\r\nWhat “not lose any data” really means\r\n\r\nI want to be blunt here.\r\n\r\nYou can get very close to no data loss, but true zero loss is hard unless you move into high availability, synchronous replication, battery backed storage, and more advanced infrastructure.\r\n\r\nFor your setup, the closest practical answer is:\r\n\r\nfull backup nightly\r\nincrementals multiple times a day\r\nbinary logs always on\r\noff machine copies\r\n\r\nThat can reduce your recovery point objective from many hours down to minutes, depending on how current your binlogs and copied backup files are. 
MariaDB’s point in time recovery model is specifically built for this kind of restoration.\r\n\r\nThe simplest strong version\r\n\r\nIf you want the least complicated version that is still solid, do this:\r\n\r\nEnable binary logs\r\nNightly mariadb-backup full backup\r\nMidday mariadb-dump\r\nRsync everything to another drive and another server\r\nTest restoring once\r\n\r\nThat is already way better than what most small businesses do.\r\n\r\nWhat I would do on your machine\r\n\r\nBecause you already use Linux servers and separate storage, I would set it up like this conceptually:\r\n\r\nMariaDB live data stays where it is now\r\nbackups go to something like /mnt/drive1/mariadb_backups/ellas_alterations/\r\nstructure:\r\nfull/\r\nincremental/\r\nsql/\r\nbinlog_archive/\r\nlogs/\r\n\r\nThen automate with cron or systemd timers.\r\n\r\nStep by step reasoning\r\nYour risk is not just server failure. It is also accidental delete, bad update, corruption, and “oops we changed the wrong record.”\r\nA single daily dump is too wide a gap.\r\nPhysical backups recover faster than pure SQL dumps.\r\nIncrementals reduce backup load while keeping more recent restore points.\r\nBinary logs are the only practical way to replay changes after the last backup and recover to a chosen time.\r\nOff machine copies protect you from total server loss.\r\nRestore testing proves the backup is real.\r\nOther angles you might not have thought about\r\nKeep backups on a different machine\r\n\r\nIf the server dies, local backups on the same disk can die with it.\r\n\r\nKeep one logical dump even if you use physical backups\r\n\r\nSQL dumps are slower, but they are convenient when you need just one database or one table.\r\n\r\nWatch backup size and retention\r\n\r\nYou do not want binlogs filling the drive and choking the server.\r\n\r\nBack up credentials and config too\r\n\r\nNot just the database files.\r\nAlso save:\r\n\r\nMariaDB config\r\nbackup scripts\r\ncron jobs\r\nbackup user 
credentials stored securely\r\nTest a restore to a second database name\r\n\r\nA backup you have never restored is just a theory.\r\n\r\nMy recommendation in one sentence\r\n\r\nUse mariadb-backup full plus incrementals as your main backup system, keep binary logs enabled for point in time recovery, add a daily mariadb-dump as a second format, and sync all backups off the server automatically.\r\n\r\nAction plan you can apply now\r\nConfirm binary logging is enabled on 192.168.7.202\r\nInstall and use mariadb-backup if it is not already installed\r\nCreate a backup folder on a separate mounted drive\r\nSet nightly full backup\r\nSet 2 hour or 4 hour incremental backups\r\nAdd 1 or 2 daily SQL dumps\r\nSync backup files to another machine or disk\r\nDo one full restore test this week\r\n\r\nI can turn this into an exact setup for your box next, including:\r\n\r\nthe MariaDB config changes\r\nbackup user creation\r\nfolder layout\r\ncron jobs\r\nfull backup script\r\nincremental backup script\r\nSQL dump script\r\nrestore instructions for ellas_alterations","content_html":"<p>ok ssh kefa@192.168.7.202 my MariaDB ellas_alterations i need a way to back this data up multiple times a day what are my options so we always have a fresh backup of data and not lose any data</p>\n<p>I’m checking the current MariaDB backup options so I can give you something solid, not hand waving. Then I’ll turn that into a setup that fits your 192.168.7.202 box and your “multiple backups a day, fresh data, don’t lose work” goal.</p>\n<p>Thought for 11s</p>\n<p>Yes. 
You absolutely can set this up so you are not relying on one lonely nightly dump and hoping for the best.</p>\n<p>The direct answer is this:</p>\n<p>The safest practical setup for your MariaDB on 192.168.7.202 is a layered backup plan:</p>\n<p>Binary logs turned on all the time</p>\n<p>A full physical backup every night with mariadb-backup</p>\n<p>Incremental backups several times a day</p>\n<p>A separate logical dump once or twice a day</p>\n<p>Copies of backups pushed to another drive or another machine</p>\n<p>Regular restore tests</p>\n<p>That gives you fresh backups, fast recovery, and a way to restore to a specific point in time instead of just “whatever the last dump was.” MariaDB’s own docs distinguish logical backups from physical backups, recommend mariadb-backup for production style physical backups, and explain that point in time recovery uses a restored backup plus binary logs. The binary log records database changes, which is exactly what you want if your goal is to avoid losing recent data.</p>\n<p>Your real options</p>\n<p>Option 1. mariadb-dump</p>\n<p>This makes SQL dump files.</p>\n<p>Good</p>\n<p>Simple</p>\n<p>Easy to restore a whole database or single tables</p>\n<p>Nice extra safety copy</p>\n<p>Bad</p>\n<p>Slower on big databases</p>\n<p>Restore takes longer</p>\n<p>Not ideal as your only “fresh all day” backup method</p>\n<p>MariaDB documents mariadb-dump as the logical backup tool, and logical backups recreate data using SQL statements like CREATE TABLE and INSERT.</p>\n<p>Option 2. 
mariadb-backup</p>\n<p>This is the one I would treat as your main production backup.</p>\n<p>Good</p>\n<p>Faster and more serious than plain SQL dumps</p>\n<p>Designed for full and incremental backups</p>\n<p>Better for larger active databases</p>\n<p>Supports restore workflows needed for point in time recovery</p>\n<p>Bad</p>\n<p>Restore procedure is a little more involved</p>\n<p>You need to prepare and test restores properly</p>\n<p>MariaDB’s docs say mariadb-backup supports full and incremental backups, and that backups must be prepared before restore.</p>\n<p>Option 3. Binary logs for point in time recovery</p>\n<p>This is what closes the “we just lost the last few hours” gap.</p>\n<p>Good</p>\n<p>Lets you recover right up to a specific time</p>\n<p>Best protection against losing recent inserts, edits, deletes</p>\n<p>Essential if you want recovery between scheduled backups</p>\n<p>Bad</p>\n<p>Not a complete backup by itself</p>\n<p>Must be combined with a full or incremental backup</p>\n<p>Needs storage management so logs do not grow forever</p>\n<p>MariaDB states that point in time recovery restores a base backup first, then replays binary logs with mysqlbinlog, and that the binary log contains records of data changes.</p>\n<p>Option 4. Replication to another MariaDB server</p>\n<p>This is more about high availability than backup.</p>\n<p>Good</p>\n<p>Extra live copy</p>\n<p>Fast failover possibilities</p>\n<p>Bad</p>\n<p>If bad data or accidental deletes replicate, you can lose data on both</p>\n<p>Still does not replace backups</p>\n<p>I would only add this later if uptime becomes a bigger deal.</p>\n<p>Option 5. 
Filesystem snapshots</p>\n<p>This can be excellent if the server storage is on ZFS, LVM snapshots, or similar.</p>\n<p>Good</p>\n<p>Fast</p>\n<p>Great for frequent local rollback points</p>\n<p>Bad</p>\n<p>Depends on how your storage is built</p>\n<p>Still should not be your only backup</p>\n<p>Snapshot alone is not the same as off machine backup</p>\n<p>What I recommend for your server</p>\n<p>For ellas_alterations, I would use this exact strategy:</p>\n<p>Best setup</p>\n<p>Layer 1</p>\n<p>Turn on binary logging and keep enough logs for several days.</p>\n<p>Layer 2</p>\n<p>Run one nightly full backup with mariadb-backup.</p>\n<p>Layer 3</p>\n<p>Run incremental backups every 4 hours during the day.</p>\n<p>Layer 4</p>\n<p>Run a logical SQL dump twice a day as a second safety net.</p>\n<p>Layer 5</p>\n<p>Immediately copy backups to a second location, ideally:</p>\n<p>another drive like /mnt/drive1 or /mnt/drive2</p>\n<p>and even better, another machine on your network</p>\n<p>Layer 6</p>\n<p>Do a test restore regularly, even if only to a temp database or test server.</p>\n<p>That combo is the sweet spot between safe, practical, and not overly crazy.</p>\n<p>Why this is the best balance</p>\n<p>Here is the logic:</p>\n<p>mariadb-dump alone is easy but too slow and too coarse if you want frequent fresh recovery points</p>\n<p>mariadb-backup gives you strong base and incremental backups</p>\n<p>binary logs fill in the gaps between those backups</p>\n<p>off server copies protect you if the whole machine dies</p>\n<p>restore tests make sure your backups are not fake confidence</p>\n<p>That is the difference between “we have backup files” and “we can actually recover.”</p>\n<p>A practical schedule</p>\n<p>Here is a schedule I would use for you:</p>\n<p>Every night at 1:00 AM</p>\n<p>full mariadb-backup</p>\n<p>Every 4 hours</p>\n<p>incremental mariadb-backup</p>\n<p>Every 12 hours</p>\n<p>mariadb-dump of ellas_alterations</p>\n<p>Every hour</p>\n<p>sync backup folder to 
another disk or server</p>\n<p>Binary logs</p>\n<p>always on</p>\n<p>keep at least 7 to 14 days, depending on space</p>\n<p>If your business data changes constantly and you are nervous about same day loss, make the incrementals every 2 hours instead of every 4.</p>\n<p>What “not lose any data” really means</p>\n<p>I want to be blunt here.</p>\n<p>You can get very close to no data loss, but true zero loss is hard unless you move into high availability, synchronous replication, battery backed storage, and more advanced infrastructure.</p>\n<p>For your setup, the closest practical answer is:</p>\n<p>full backup nightly</p>\n<p>incrementals multiple times a day</p>\n<p>binary logs always on</p>\n<p>off machine copies</p>\n<p>That can reduce your recovery point objective from many hours down to minutes, depending on how current your binlogs and copied backup files are. MariaDB’s point in time recovery model is specifically built for this kind of restoration.</p>\n<p>The simplest strong version</p>\n<p>If you want the least complicated version that is still solid, do this:</p>\n<p>Enable binary logs</p>\n<p>Nightly mariadb-backup full backup</p>\n<p>Midday mariadb-dump</p>\n<p>Rsync everything to another drive and another server</p>\n<p>Test restoring once</p>\n<p>That is already way better than what most small businesses do.</p>\n<p>What I would do on your machine</p>\n<p>Because you already use Linux servers and separate storage, I would set it up like this conceptually:</p>\n<p>MariaDB live data stays where it is now</p>\n<p>backups go to something like /mnt/drive1/mariadb_backups/ellas_alterations/</p>\n<p>structure:</p>\n<p>full/</p>\n<p>incremental/</p>\n<p>sql/</p>\n<p>binlog_archive/</p>\n<p>logs/</p>\n<p>Then automate with cron or systemd timers.</p>\n<p>Step by step reasoning</p>\n<p>Your risk is not just server failure. 
It is also accidental delete, bad update, corruption, and “oops we changed the wrong record.”</p>\n<p>A single daily dump is too wide a gap.</p>\n<p>Physical backups recover faster than pure SQL dumps.</p>\n<p>Incrementals reduce backup load while keeping more recent restore points.</p>\n<p>Binary logs are the only practical way to replay changes after the last backup and recover to a chosen time.</p>\n<p>Off machine copies protect you from total server loss.</p>\n<p>Restore testing proves the backup is real.</p>\n<p>Other angles you might not have thought about</p>\n<p>Keep backups on a different machine</p>\n<p>If the server dies, local backups on the same disk can die with it.</p>\n<p>Keep one logical dump even if you use physical backups</p>\n<p>SQL dumps are slower, but they are convenient when you need just one database or one table.</p>\n<p>Watch backup size and retention</p>\n<p>You do not want binlogs filling the drive and choking the server.</p>\n<p>Back up credentials and config too</p>\n<p>Not just the database files.</p>\n<p>Also save:</p>\n<p>MariaDB config</p>\n<p>backup scripts</p>\n<p>cron jobs</p>\n<p>backup user credentials stored securely</p>\n<p>Test a restore to a second database name</p>\n<p>A backup you have never restored is just a theory.</p>\n<p>My recommendation in one sentence</p>\n<p>Use mariadb-backup full plus incrementals as your main backup system, keep binary logs enabled for point in time recovery, add a daily mariadb-dump as a second format, and sync all backups off the server automatically.</p>\n<p>Action plan you can apply now</p>\n<p>Confirm binary logging is enabled on 192.168.7.202</p>\n<p>Install and use mariadb-backup if it is not already installed</p>\n<p>Create a backup folder on a separate mounted drive</p>\n<p>Set nightly full backup</p>\n<p>Set 2 hour or 4 hour incremental backups</p>\n<p>Add 1 or 2 daily SQL dumps</p>\n<p>Sync backup files to another machine or disk</p>\n<p>Do one full restore test this 
week</p>\n<p>I can turn this into an exact setup for your box next, including:</p>\n<p>the MariaDB config changes</p>\n<p>backup user creation</p>\n<p>folder layout</p>\n<p>cron jobs</p>\n<p>full backup script</p>\n<p>incremental backup script</p>\n<p>SQL dump script</p>\n<p>restore instructions for ellas_alterations</p>","file_name":"SESSION_LOG_2026-04-05.md","stored_name":"20260405-133813-0749951d.md","mime_type":"application/octet-stream","file_size_bytes":"15661","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-133813-0749951d.md","is_deleted":"0","created_at":"2026-04-05 09:38:13","updated_at":"2026-04-05 16:00:10"}
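The session log preserved in the record above proposes a backup directory layout (full/, incremental/, sql/, binlog_archive/, logs/ under /mnt/drive1/mariadb_backups/ellas_alterations/) plus a nightly full and periodic incremental run of mariadb-backup. As a minimal, hedged sketch of that layout and the wrapper it implies: the `BACKUP_BASE` default and the dry-run `echo` lines are illustrative assumptions, not part of the logged setup — a real wrapper would execute the mariadb-backup commands instead of printing them.

```shell
#!/bin/sh
# Sketch of the backup folder layout and wrapper described in the
# logged session. BACKUP_BASE and the dry-run echoes are assumptions;
# a real run would execute mariadb-backup rather than echo it.
set -eu

BACKUP_BASE="${BACKUP_BASE:-./mariadb_backups/ellas_alterations}"

# Directory structure proposed in the log:
# full/, incremental/, sql/, binlog_archive/, logs/
for d in full incremental sql binlog_archive logs; do
    mkdir -p "$BACKUP_BASE/$d"
done

STAMP="$(date +%Y%m%d_%H%M%S)"

# Dry-run: print the commands a real wrapper would run.
echo "mariadb-backup --backup --target-dir=$BACKUP_BASE/full/$STAMP"
echo "mariadb-backup --backup --incremental-basedir=$BACKUP_BASE/full/latest --target-dir=$BACKUP_BASE/incremental/$STAMP"
```

Note that, as the logged session says, a physical backup taken this way must still be prepared (`mariadb-backup --prepare`) before it can be restored.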
Changed Data
{"backend_document_id":"6","document_type":"markdown","title":"202 DB Save","slug":"202-db-save","summary_text":"Claude 04-05-2026 Changes","content_markdown":"ok ssh kefa@192.168.7.202 my MariaDB ellas_alterations i need a way to back this data up multiple times a day what are my options so we always have a fresh backup of data and not lose any data\r\n\r\nI’m checking the current MariaDB backup options so I can give you something solid, not hand waving. Then I’ll turn that into a setup that fits your 192.168.7.202 box and your “multiple backups a day, fresh data, don’t lose work” goal.\r\n\r\nThought for 11s\r\n\r\nYes. You absolutely can set this up so you are not relying on one lonely nightly dump and hoping for the best.\r\n\r\nThe direct answer is this:\r\n\r\nThe safest practical setup for your MariaDB on 192.168.7.202 is a layered backup plan:\r\n\r\nBinary logs turned on all the time\r\nA full physical backup every night with mariadb-backup\r\nIncremental backups several times a day\r\nA separate logical dump once or twice a day\r\nCopies of backups pushed to another drive or another machine\r\nRegular restore tests\r\n\r\nThat gives you fresh backups, fast recovery, and a way to restore to a specific point in time instead of just “whatever the last dump was.” MariaDB’s own docs distinguish logical backups from physical backups, recommend mariadb-backup for production style physical backups, and explain that point in time recovery uses a restored backup plus binary logs. The binary log records database changes, which is exactly what you want if your goal is to avoid losing recent data.\r\n\r\nYour real options\r\nOption 1. 
mariadb-dump\r\n\r\nThis makes SQL dump files.\r\n\r\nGood\r\n\r\nSimple\r\nEasy to restore a whole database or single tables\r\nNice extra safety copy\r\n\r\nBad\r\n\r\nSlower on big databases\r\nRestore takes longer\r\nNot ideal as your only “fresh all day” backup method\r\n\r\nMariaDB documents mariadb-dump as the logical backup tool, and logical backups recreate data using SQL statements like CREATE TABLE and INSERT.\r\n\r\nOption 2. mariadb-backup\r\n\r\nThis is the one I would treat as your main production backup.\r\n\r\nGood\r\n\r\nFaster and more serious than plain SQL dumps\r\nDesigned for full and incremental backups\r\nBetter for larger active databases\r\nSupports restore workflows needed for point in time recovery\r\n\r\nBad\r\n\r\nRestore procedure is a little more involved\r\nYou need to prepare and test restores properly\r\n\r\nMariaDB’s docs say mariadb-backup supports full and incremental backups, and that backups must be prepared before restore.\r\n\r\nOption 3. Binary logs for point in time recovery\r\n\r\nThis is what closes the “we just lost the last few hours” gap.\r\n\r\nGood\r\n\r\nLets you recover right up to a specific time\r\nBest protection against losing recent inserts, edits, deletes\r\nEssential if you want recovery between scheduled backups\r\n\r\nBad\r\n\r\nNot a complete backup by itself\r\nMust be combined with a full or incremental backup\r\nNeeds storage management so logs do not grow forever\r\n\r\nMariaDB states that point in time recovery restores a base backup first, then replays binary logs with mysqlbinlog, and that the binary log contains records of data changes.\r\n\r\nOption 4. 
Replication to another MariaDB server\r\n\r\nThis is more about high availability than backup.\r\n\r\nGood\r\n\r\nExtra live copy\r\nFast failover possibilities\r\n\r\nBad\r\n\r\nIf bad data or accidental deletes replicate, you can lose data on both\r\nStill does not replace backups\r\n\r\nI would only add this later if uptime becomes a bigger deal.\r\n\r\nOption 5. Filesystem snapshots\r\n\r\nThis can be excellent if the server storage is on ZFS, LVM snapshots, or similar.\r\n\r\nGood\r\n\r\nFast\r\nGreat for frequent local rollback points\r\n\r\nBad\r\n\r\nDepends on how your storage is built\r\nStill should not be your only backup\r\nSnapshot alone is not the same as off machine backup\r\nWhat I recommend for your server\r\n\r\nFor ellas_alterations, I would use this exact strategy:\r\n\r\nBest setup\r\n\r\nLayer 1\r\nTurn on binary logging and keep enough logs for several days.\r\n\r\nLayer 2\r\nRun one nightly full backup with mariadb-backup.\r\n\r\nLayer 3\r\nRun incremental backups every 4 hours during the day.\r\n\r\nLayer 4\r\nRun a logical SQL dump twice a day as a second safety net.\r\n\r\nLayer 5\r\nImmediately copy backups to a second location, ideally:\r\n\r\nanother drive like /mnt/drive1 or /mnt/drive2\r\nand even better, another machine on your network\r\n\r\nLayer 6\r\nDo a test restore regularly, even if only to a temp database or test server.\r\n\r\nThat combo is the sweet spot between safe, practical, and not overly crazy.\r\n\r\nWhy this is the best balance\r\n\r\nHere is the logic:\r\n\r\nmariadb-dump alone is easy but too slow and too coarse if you want frequent fresh recovery points\r\nmariadb-backup gives you strong base and incremental backups\r\nbinary logs fill in the gaps between those backups\r\noff server copies protect you if the whole machine dies\r\nrestore tests make sure your backups are not fake confidence\r\n\r\nThat is the difference between “we have backup files” and “we can actually recover.”\r\n\r\nA practical 
schedule\r\n\r\nHere is a schedule I would use for you:\r\n\r\nEvery night at 1:00 AM\r\n\r\nfull mariadb-backup\r\n\r\nEvery 4 hours\r\n\r\nincremental mariadb-backup\r\n\r\nEvery 12 hours\r\n\r\nmariadb-dump of ellas_alterations\r\n\r\nEvery hour\r\n\r\nsync backup folder to another disk or server\r\n\r\nBinary logs\r\n\r\nalways on\r\nkeep at least 7 to 14 days, depending on space\r\n\r\nIf your business data changes constantly and you are nervous about same day loss, make the incrementals every 2 hours instead of every 4.\r\n\r\nWhat “not lose any data” really means\r\n\r\nI want to be blunt here.\r\n\r\nYou can get very close to no data loss, but true zero loss is hard unless you move into high availability, synchronous replication, battery backed storage, and more advanced infrastructure.\r\n\r\nFor your setup, the closest practical answer is:\r\n\r\nfull backup nightly\r\nincrementals multiple times a day\r\nbinary logs always on\r\noff machine copies\r\n\r\nThat can reduce your recovery point objective from many hours down to minutes, depending on how current your binlogs and copied backup files are. 
MariaDB’s point in time recovery model is specifically built for this kind of restoration.\r\n\r\nThe simplest strong version\r\n\r\nIf you want the least complicated version that is still solid, do this:\r\n\r\nEnable binary logs\r\nNightly mariadb-backup full backup\r\nMidday mariadb-dump\r\nRsync everything to another drive and another server\r\nTest restoring once\r\n\r\nThat is already way better than what most small businesses do.\r\n\r\nWhat I would do on your machine\r\n\r\nBecause you already use Linux servers and separate storage, I would set it up like this conceptually:\r\n\r\nMariaDB live data stays where it is now\r\nbackups go to something like /mnt/drive1/mariadb_backups/ellas_alterations/\r\nstructure:\r\nfull/\r\nincremental/\r\nsql/\r\nbinlog_archive/\r\nlogs/\r\n\r\nThen automate with cron or systemd timers.\r\n\r\nStep by step reasoning\r\nYour risk is not just server failure. It is also accidental delete, bad update, corruption, and “oops we changed the wrong record.”\r\nA single daily dump is too wide a gap.\r\nPhysical backups recover faster than pure SQL dumps.\r\nIncrementals reduce backup load while keeping more recent restore points.\r\nBinary logs are the only practical way to replay changes after the last backup and recover to a chosen time.\r\nOff machine copies protect you from total server loss.\r\nRestore testing proves the backup is real.\r\nOther angles you might not have thought about\r\nKeep backups on a different machine\r\n\r\nIf the server dies, local backups on the same disk can die with it.\r\n\r\nKeep one logical dump even if you use physical backups\r\n\r\nSQL dumps are slower, but they are convenient when you need just one database or one table.\r\n\r\nWatch backup size and retention\r\n\r\nYou do not want binlogs filling the drive and choking the server.\r\n\r\nBack up credentials and config too\r\n\r\nNot just the database files.\r\nAlso save:\r\n\r\nMariaDB config\r\nbackup scripts\r\ncron jobs\r\nbackup user 
credentials stored securely\r\nTest a restore to a second database name\r\n\r\nA backup you have never restored is just a theory.\r\n\r\nMy recommendation in one sentence\r\n\r\nUse mariadb-backup full plus incrementals as your main backup system, keep binary logs enabled for point in time recovery, add a daily mariadb-dump as a second format, and sync all backups off the server automatically.\r\n\r\nAction plan you can apply now\r\nConfirm binary logging is enabled on 192.168.7.202\r\nInstall and use mariadb-backup if it is not already installed\r\nCreate a backup folder on a separate mounted drive\r\nSet nightly full backup\r\nSet 2 hour or 4 hour incremental backups\r\nAdd 1 or 2 daily SQL dumps\r\nSync backup files to another machine or disk\r\nDo one full restore test this week\r\n\r\nI can turn this into an exact setup for your box next, including:\r\n\r\nthe MariaDB config changes\r\nbackup user creation\r\nfolder layout\r\ncron jobs\r\nfull backup script\r\nincremental backup script\r\nSQL dump script\r\nrestore instructions for ellas_alterations","content_html":"<p>ok ssh kefa@192.168.7.202 my MariaDB ellas_alterations i need a way to back this data up multiple times a day what are my options so we always have a fresh backup of data and not lose any data</p>\n<p>I’m checking the current MariaDB backup options so I can give you something solid, not hand waving. Then I’ll turn that into a setup that fits your 192.168.7.202 box and your “multiple backups a day, fresh data, don’t lose work” goal.</p>\n<p>Thought for 11s</p>\n<p>Yes. 
You absolutely can set this up so you are not relying on one lonely nightly dump and hoping for the best.</p>\n<p>The direct answer is this:</p>\n<p>The safest practical setup for your MariaDB on 192.168.7.202 is a layered backup plan:</p>\n<p>Binary logs turned on all the time</p>\n<p>A full physical backup every night with mariadb-backup</p>\n<p>Incremental backups several times a day</p>\n<p>A separate logical dump once or twice a day</p>\n<p>Copies of backups pushed to another drive or another machine</p>\n<p>Regular restore tests</p>\n<p>That gives you fresh backups, fast recovery, and a way to restore to a specific point in time instead of just “whatever the last dump was.” MariaDB’s own docs distinguish logical backups from physical backups, recommend mariadb-backup for production style physical backups, and explain that point in time recovery uses a restored backup plus binary logs. The binary log records database changes, which is exactly what you want if your goal is to avoid losing recent data.</p>\n<p>Your real options</p>\n<p>Option 1. mariadb-dump</p>\n<p>This makes SQL dump files.</p>\n<p>Good</p>\n<p>Simple</p>\n<p>Easy to restore a whole database or single tables</p>\n<p>Nice extra safety copy</p>\n<p>Bad</p>\n<p>Slower on big databases</p>\n<p>Restore takes longer</p>\n<p>Not ideal as your only “fresh all day” backup method</p>\n<p>MariaDB documents mariadb-dump as the logical backup tool, and logical backups recreate data using SQL statements like CREATE TABLE and INSERT.</p>\n<p>Option 2. 
mariadb-backup</p>\n<p>This is the one I would treat as your main production backup.</p>\n<p>Good</p>\n<p>Faster and more serious than plain SQL dumps</p>\n<p>Designed for full and incremental backups</p>\n<p>Better for larger active databases</p>\n<p>Supports restore workflows needed for point in time recovery</p>\n<p>Bad</p>\n<p>Restore procedure is a little more involved</p>\n<p>You need to prepare and test restores properly</p>\n<p>MariaDB’s docs say mariadb-backup supports full and incremental backups, and that backups must be prepared before restore.</p>\n<p>Option 3. Binary logs for point in time recovery</p>\n<p>This is what closes the “we just lost the last few hours” gap.</p>\n<p>Good</p>\n<p>Lets you recover right up to a specific time</p>\n<p>Best protection against losing recent inserts, edits, deletes</p>\n<p>Essential if you want recovery between scheduled backups</p>\n<p>Bad</p>\n<p>Not a complete backup by itself</p>\n<p>Must be combined with a full or incremental backup</p>\n<p>Needs storage management so logs do not grow forever</p>\n<p>MariaDB states that point in time recovery restores a base backup first, then replays binary logs with mysqlbinlog, and that the binary log contains records of data changes.</p>\n<p>Option 4. Replication to another MariaDB server</p>\n<p>This is more about high availability than backup.</p>\n<p>Good</p>\n<p>Extra live copy</p>\n<p>Fast failover possibilities</p>\n<p>Bad</p>\n<p>If bad data or accidental deletes replicate, you can lose data on both</p>\n<p>Still does not replace backups</p>\n<p>I would only add this later if uptime becomes a bigger deal.</p>\n<p>Option 5. 
Filesystem snapshots</p>\n<p>This can be excellent if the server storage is on ZFS, LVM snapshots, or similar.</p>\n<p>Good</p>\n<p>Fast</p>\n<p>Great for frequent local rollback points</p>\n<p>Bad</p>\n<p>Depends on how your storage is built</p>\n<p>Still should not be your only backup</p>\n<p>Snapshot alone is not the same as off machine backup</p>\n<p>What I recommend for your server</p>\n<p>For ellas_alterations, I would use this exact strategy:</p>\n<p>Best setup</p>\n<p>Layer 1</p>\n<p>Turn on binary logging and keep enough logs for several days.</p>\n<p>Layer 2</p>\n<p>Run one nightly full backup with mariadb-backup.</p>\n<p>Layer 3</p>\n<p>Run incremental backups every 4 hours during the day.</p>\n<p>Layer 4</p>\n<p>Run a logical SQL dump twice a day as a second safety net.</p>\n<p>Layer 5</p>\n<p>Immediately copy backups to a second location, ideally:</p>\n<p>another drive like /mnt/drive1 or /mnt/drive2</p>\n<p>and even better, another machine on your network</p>\n<p>Layer 6</p>\n<p>Do a test restore regularly, even if only to a temp database or test server.</p>\n<p>That combo is the sweet spot between safe, practical, and not overly crazy.</p>\n<p>Why this is the best balance</p>\n<p>Here is the logic:</p>\n<p>mariadb-dump alone is easy but too slow and too coarse if you want frequent fresh recovery points</p>\n<p>mariadb-backup gives you strong base and incremental backups</p>\n<p>binary logs fill in the gaps between those backups</p>\n<p>off server copies protect you if the whole machine dies</p>\n<p>restore tests make sure your backups are not fake confidence</p>\n<p>That is the difference between “we have backup files” and “we can actually recover.”</p>\n<p>A practical schedule</p>\n<p>Here is a schedule I would use for you:</p>\n<p>Every night at 1:00 AM</p>\n<p>full mariadb-backup</p>\n<p>Every 4 hours</p>\n<p>incremental mariadb-backup</p>\n<p>Every 12 hours</p>\n<p>mariadb-dump of ellas_alterations</p>\n<p>Every hour</p>\n<p>sync backup folder to 
another disk or server</p>\n<p>Binary logs</p>\n<p>always on</p>\n<p>keep at least 7 to 14 days, depending on space</p>\n<p>If your business data changes constantly and you are nervous about same day loss, make the incrementals every 2 hours instead of every 4.</p>\n<p>What “not lose any data” really means</p>\n<p>I want to be blunt here.</p>\n<p>You can get very close to no data loss, but true zero loss is hard unless you move into high availability, synchronous replication, battery backed storage, and more advanced infrastructure.</p>\n<p>For your setup, the closest practical answer is:</p>\n<p>full backup nightly</p>\n<p>incrementals multiple times a day</p>\n<p>binary logs always on</p>\n<p>off machine copies</p>\n<p>That can reduce your recovery point objective from many hours down to minutes, depending on how current your binlogs and copied backup files are. MariaDB’s point in time recovery model is specifically built for this kind of restoration.</p>\n<p>The simplest strong version</p>\n<p>If you want the least complicated version that is still solid, do this:</p>\n<p>Enable binary logs</p>\n<p>Nightly mariadb-backup full backup</p>\n<p>Midday mariadb-dump</p>\n<p>Rsync everything to another drive and another server</p>\n<p>Test restoring once</p>\n<p>That is already way better than what most small businesses do.</p>\n<p>What I would do on your machine</p>\n<p>Because you already use Linux servers and separate storage, I would set it up like this conceptually:</p>\n<p>MariaDB live data stays where it is now</p>\n<p>backups go to something like /mnt/drive1/mariadb_backups/ellas_alterations/</p>\n<p>structure:</p>\n<p>full/</p>\n<p>incremental/</p>\n<p>sql/</p>\n<p>binlog_archive/</p>\n<p>logs/</p>\n<p>Then automate with cron or systemd timers.</p>\n<p>Step by step reasoning</p>\n<p>Your risk is not just server failure. 
It is also accidental delete, bad update, corruption, and “oops we changed the wrong record.”</p>\n<p>A single daily dump is too wide a gap.</p>\n<p>Physical backups recover faster than pure SQL dumps.</p>\n<p>Incrementals reduce backup load while keeping more recent restore points.</p>\n<p>Binary logs are the only practical way to replay changes after the last backup and recover to a chosen time.</p>\n<p>Off machine copies protect you from total server loss.</p>\n<p>Restore testing proves the backup is real.</p>\n<p>Other angles you might not have thought about</p>\n<p>Keep backups on a different machine</p>\n<p>If the server dies, local backups on the same disk can die with it.</p>\n<p>Keep one logical dump even if you use physical backups</p>\n<p>SQL dumps are slower, but they are convenient when you need just one database or one table.</p>\n<p>Watch backup size and retention</p>\n<p>You do not want binlogs filling the drive and choking the server.</p>\n<p>Back up credentials and config too</p>\n<p>Not just the database files.</p>\n<p>Also save:</p>\n<p>MariaDB config</p>\n<p>backup scripts</p>\n<p>cron jobs</p>\n<p>backup user credentials stored securely</p>\n<p>Test a restore to a second database name</p>\n<p>A backup you have never restored is just a theory.</p>\n<p>My recommendation in one sentence</p>\n<p>Use mariadb-backup full plus incrementals as your main backup system, keep binary logs enabled for point in time recovery, add a daily mariadb-dump as a second format, and sync all backups off the server automatically.</p>\n<p>Action plan you can apply now</p>\n<p>Confirm binary logging is enabled on 192.168.7.202</p>\n<p>Install and use mariadb-backup if it is not already installed</p>\n<p>Create a backup folder on a separate mounted drive</p>\n<p>Set nightly full backup</p>\n<p>Set 2 hour or 4 hour incremental backups</p>\n<p>Add 1 or 2 daily SQL dumps</p>\n<p>Sync backup files to another machine or disk</p>\n<p>Do one full restore test this 
week</p>\n<p>I can turn this into an exact setup for your box next, including:</p>\n<p>the MariaDB config changes</p>\n<p>backup user creation</p>\n<p>folder layout</p>\n<p>cron jobs</p>\n<p>full backup script</p>\n<p>incremental backup script</p>\n<p>SQL dump script</p>\n<p>restore instructions for ellas_alterations</p>","file_name":"SESSION_LOG_2026-04-05.md","stored_name":"20260405-133813-0749951d.md","mime_type":"application/octet-stream","file_size_bytes":"15661","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-133813-0749951d.md","is_deleted":"0","created_at":"2026-04-05 09:38:13","updated_at":"2026-04-05 16:01:53"}
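The layered plan recorded in the session log above (nightly full backup, intraday incrementals, twice-daily SQL dumps, hourly off-box sync) can be sketched as a single dispatcher script. This is a hedged sketch, not the actual scripts from that session: the `backup` MariaDB user, the `backuphost` rsync target, and the folder layout under `/mnt/drive1/mariadb_backups/ellas_alterations/` are assumptions taken from the log text, and the script defaults to a dry run that only prints the commands it would execute.

```shell
#!/usr/bin/env bash
# Sketch of the layered backup cycle described in the session log above.
# Assumptions: a `backup` MariaDB user exists and `backuphost` is reachable.
set -u

BACKUP_ROOT="/mnt/drive1/mariadb_backups/ellas_alterations"   # layout from the log
STAMP="$(date +%Y%m%d-%H%M%S)"
DRY_RUN="${DRY_RUN:-1}"   # default: print the plan; set DRY_RUN=0 on the real box

# run: echo the command in dry-run mode, execute it otherwise
run() {
  if [ "$DRY_RUN" = "1" ]; then printf '+ %s\n' "$*"; else "$@"; fi
}

# Nightly (e.g. 1:00 AM via cron): full physical backup
full_backup() {
  run mariadb-backup --backup --user=backup --target-dir="$BACKUP_ROOT/full/$STAMP"
}

# Every 2 to 4 hours: incremental backup against the most recent full
incremental_backup() {
  local base
  base="$(ls -1d "$BACKUP_ROOT"/full/* 2>/dev/null | tail -n 1)"
  run mariadb-backup --backup --user=backup \
    --target-dir="$BACKUP_ROOT/incremental/$STAMP" \
    --incremental-basedir="$base"
}

# Every 12 hours: logical dump as a second restore format
sql_dump() {
  run mariadb-dump --user=backup --single-transaction ellas_alterations \
    --result-file="$BACKUP_ROOT/sql/ellas_alterations-$STAMP.sql"
}

# Every hour: push the whole backup tree off the machine
sync_offbox() {
  run rsync -a --delete "$BACKUP_ROOT/" backuphost:/srv/mariadb_backups/ellas_alterations/
}

case "${1:-full}" in
  full)        full_backup ;;
  incremental) incremental_backup ;;
  dump)        sql_dump ;;
  sync)        sync_offbox ;;
esac
```

Each mode maps to one cron line (full nightly, incremental every 4 hours, dump every 12, sync hourly), matching the schedule the log proposes.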
documents · save
2026-04-05 16:00:10 · anonymous
backend_document #6
Context
{"document_type":"upload"}
Before
{"backend_document_id":"6","document_type":"upload","title":"Claude 04-05-2026 Changes","slug":"claude-04-05-2026-changes","summary_text":"Claude 04-05-2026 Changes","content_markdown":null,"content_html":null,"file_name":"SESSION_LOG_2026-04-05.md","stored_name":"20260405-133813-0749951d.md","mime_type":"application/octet-stream","file_size_bytes":"15661","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-133813-0749951d.md","is_deleted":"0","created_at":"2026-04-05 09:38:13","updated_at":"2026-04-05 09:38:13"}
Changed Data
{"backend_document_id":"6","document_type":"upload","title":"Claude 04-05-2026 Changes","slug":"claude-04-05-2026-changes","summary_text":"Claude 04-05-2026 Changes","content_markdown":"ok ssh kefa@192.168.7.202 my MariaDB ellas_alterations i need a way to back this data up multiple times a day what are my options so we always have a fresh backup of data and not lose any data\r\n\r\nI’m checking the current MariaDB backup options so I can give you something solid, not hand waving. Then I’ll turn that into a setup that fits your 192.168.7.202 box and your “multiple backups a day, fresh data, don’t lose work” goal.\r\n\r\nThought for 11s\r\n\r\nYes. You absolutely can set this up so you are not relying on one lonely nightly dump and hoping for the best.\r\n\r\nThe direct answer is this:\r\n\r\nThe safest practical setup for your MariaDB on 192.168.7.202 is a layered backup plan:\r\n\r\nBinary logs turned on all the time\r\nA full physical backup every night with mariadb-backup\r\nIncremental backups several times a day\r\nA separate logical dump once or twice a day\r\nCopies of backups pushed to another drive or another machine\r\nRegular restore tests\r\n\r\nThat gives you fresh backups, fast recovery, and a way to restore to a specific point in time instead of just “whatever the last dump was.” MariaDB’s own docs distinguish logical backups from physical backups, recommend mariadb-backup for production style physical backups, and explain that point in time recovery uses a restored backup plus binary logs. The binary log records database changes, which is exactly what you want if your goal is to avoid losing recent data.\r\n\r\nYour real options\r\nOption 1. 
mariadb-dump\r\n\r\nThis makes SQL dump files.\r\n\r\nGood\r\n\r\nSimple\r\nEasy to restore a whole database or single tables\r\nNice extra safety copy\r\n\r\nBad\r\n\r\nSlower on big databases\r\nRestore takes longer\r\nNot ideal as your only “fresh all day” backup method\r\n\r\nMariaDB documents mariadb-dump as the logical backup tool, and logical backups recreate data using SQL statements like CREATE TABLE and INSERT.\r\n\r\nOption 2. mariadb-backup\r\n\r\nThis is the one I would treat as your main production backup.\r\n\r\nGood\r\n\r\nFaster and more serious than plain SQL dumps\r\nDesigned for full and incremental backups\r\nBetter for larger active databases\r\nSupports restore workflows needed for point in time recovery\r\n\r\nBad\r\n\r\nRestore procedure is a little more involved\r\nYou need to prepare and test restores properly\r\n\r\nMariaDB’s docs say mariadb-backup supports full and incremental backups, and that backups must be prepared before restore.\r\n\r\nOption 3. Binary logs for point in time recovery\r\n\r\nThis is what closes the “we just lost the last few hours” gap.\r\n\r\nGood\r\n\r\nLets you recover right up to a specific time\r\nBest protection against losing recent inserts, edits, deletes\r\nEssential if you want recovery between scheduled backups\r\n\r\nBad\r\n\r\nNot a complete backup by itself\r\nMust be combined with a full or incremental backup\r\nNeeds storage management so logs do not grow forever\r\n\r\nMariaDB states that point in time recovery restores a base backup first, then replays binary logs with mysqlbinlog, and that the binary log contains records of data changes.\r\n\r\nOption 4. 
Replication to another MariaDB server\r\n\r\nThis is more about high availability than backup.\r\n\r\nGood\r\n\r\nExtra live copy\r\nFast failover possibilities\r\n\r\nBad\r\n\r\nIf bad data or accidental deletes replicate, you can lose data on both\r\nStill does not replace backups\r\n\r\nI would only add this later if uptime becomes a bigger deal.\r\n\r\nOption 5. Filesystem snapshots\r\n\r\nThis can be excellent if the server storage is on ZFS, LVM snapshots, or similar.\r\n\r\nGood\r\n\r\nFast\r\nGreat for frequent local rollback points\r\n\r\nBad\r\n\r\nDepends on how your storage is built\r\nStill should not be your only backup\r\nSnapshot alone is not the same as off machine backup\r\nWhat I recommend for your server\r\n\r\nFor ellas_alterations, I would use this exact strategy:\r\n\r\nBest setup\r\n\r\nLayer 1\r\nTurn on binary logging and keep enough logs for several days.\r\n\r\nLayer 2\r\nRun one nightly full backup with mariadb-backup.\r\n\r\nLayer 3\r\nRun incremental backups every 4 hours during the day.\r\n\r\nLayer 4\r\nRun a logical SQL dump twice a day as a second safety net.\r\n\r\nLayer 5\r\nImmediately copy backups to a second location, ideally:\r\n\r\nanother drive like /mnt/drive1 or /mnt/drive2\r\nand even better, another machine on your network\r\n\r\nLayer 6\r\nDo a test restore regularly, even if only to a temp database or test server.\r\n\r\nThat combo is the sweet spot between safe, practical, and not overly crazy.\r\n\r\nWhy this is the best balance\r\n\r\nHere is the logic:\r\n\r\nmariadb-dump alone is easy but too slow and too coarse if you want frequent fresh recovery points\r\nmariadb-backup gives you strong base and incremental backups\r\nbinary logs fill in the gaps between those backups\r\noff server copies protect you if the whole machine dies\r\nrestore tests make sure your backups are not fake confidence\r\n\r\nThat is the difference between “we have backup files” and “we can actually recover.”\r\n\r\nA practical 
schedule\r\n\r\nHere is a schedule I would use for you:\r\n\r\nEvery night at 1:00 AM\r\n\r\nfull mariadb-backup\r\n\r\nEvery 4 hours\r\n\r\nincremental mariadb-backup\r\n\r\nEvery 12 hours\r\n\r\nmariadb-dump of ellas_alterations\r\n\r\nEvery hour\r\n\r\nsync backup folder to another disk or server\r\n\r\nBinary logs\r\n\r\nalways on\r\nkeep at least 7 to 14 days, depending on space\r\n\r\nIf your business data changes constantly and you are nervous about same day loss, make the incrementals every 2 hours instead of every 4.\r\n\r\nWhat “not lose any data” really means\r\n\r\nI want to be blunt here.\r\n\r\nYou can get very close to no data loss, but true zero loss is hard unless you move into high availability, synchronous replication, battery backed storage, and more advanced infrastructure.\r\n\r\nFor your setup, the closest practical answer is:\r\n\r\nfull backup nightly\r\nincrementals multiple times a day\r\nbinary logs always on\r\noff machine copies\r\n\r\nThat can reduce your recovery point objective from many hours down to minutes, depending on how current your binlogs and copied backup files are. 
MariaDB’s point in time recovery model is specifically built for this kind of restoration.\r\n\r\nThe simplest strong version\r\n\r\nIf you want the least complicated version that is still solid, do this:\r\n\r\nEnable binary logs\r\nNightly mariadb-backup full backup\r\nMidday mariadb-dump\r\nRsync everything to another drive and another server\r\nTest restoring once\r\n\r\nThat is already way better than what most small businesses do.\r\n\r\nWhat I would do on your machine\r\n\r\nBecause you already use Linux servers and separate storage, I would set it up like this conceptually:\r\n\r\nMariaDB live data stays where it is now\r\nbackups go to something like /mnt/drive1/mariadb_backups/ellas_alterations/\r\nstructure:\r\nfull/\r\nincremental/\r\nsql/\r\nbinlog_archive/\r\nlogs/\r\n\r\nThen automate with cron or systemd timers.\r\n\r\nStep by step reasoning\r\nYour risk is not just server failure. It is also accidental delete, bad update, corruption, and “oops we changed the wrong record.”\r\nA single daily dump is too wide a gap.\r\nPhysical backups recover faster than pure SQL dumps.\r\nIncrementals reduce backup load while keeping more recent restore points.\r\nBinary logs are the only practical way to replay changes after the last backup and recover to a chosen time.\r\nOff machine copies protect you from total server loss.\r\nRestore testing proves the backup is real.\r\nOther angles you might not have thought about\r\nKeep backups on a different machine\r\n\r\nIf the server dies, local backups on the same disk can die with it.\r\n\r\nKeep one logical dump even if you use physical backups\r\n\r\nSQL dumps are slower, but they are convenient when you need just one database or one table.\r\n\r\nWatch backup size and retention\r\n\r\nYou do not want binlogs filling the drive and choking the server.\r\n\r\nBack up credentials and config too\r\n\r\nNot just the database files.\r\nAlso save:\r\n\r\nMariaDB config\r\nbackup scripts\r\ncron jobs\r\nbackup user 
credentials stored securely\r\nTest a restore to a second database name\r\n\r\nA backup you have never restored is just a theory.\r\n\r\nMy recommendation in one sentence\r\n\r\nUse mariadb-backup full plus incrementals as your main backup system, keep binary logs enabled for point in time recovery, add a daily mariadb-dump as a second format, and sync all backups off the server automatically.\r\n\r\nAction plan you can apply now\r\nConfirm binary logging is enabled on 192.168.7.202\r\nInstall and use mariadb-backup if it is not already installed\r\nCreate a backup folder on a separate mounted drive\r\nSet nightly full backup\r\nSet 2 hour or 4 hour incremental backups\r\nAdd 1 or 2 daily SQL dumps\r\nSync backup files to another machine or disk\r\nDo one full restore test this week\r\n\r\nI can turn this into an exact setup for your box next, including:\r\n\r\nthe MariaDB config changes\r\nbackup user creation\r\nfolder layout\r\ncron jobs\r\nfull backup script\r\nincremental backup script\r\nSQL dump script\r\nrestore instructions for ellas_alterations","content_html":"<p>ok ssh kefa@192.168.7.202 my MariaDB ellas_alterations i need a way to back this data up multiple times a day what are my options so we always have a fresh backup of data and not lose any data</p>\n<p>I’m checking the current MariaDB backup options so I can give you something solid, not hand waving. Then I’ll turn that into a setup that fits your 192.168.7.202 box and your “multiple backups a day, fresh data, don’t lose work” goal.</p>\n<p>Thought for 11s</p>\n<p>Yes. 
You absolutely can set this up so you are not relying on one lonely nightly dump and hoping for the best.</p>\n<p>The direct answer is this:</p>\n<p>The safest practical setup for your MariaDB on 192.168.7.202 is a layered backup plan:</p>\n<p>Binary logs turned on all the time</p>\n<p>A full physical backup every night with mariadb-backup</p>\n<p>Incremental backups several times a day</p>\n<p>A separate logical dump once or twice a day</p>\n<p>Copies of backups pushed to another drive or another machine</p>\n<p>Regular restore tests</p>\n<p>That gives you fresh backups, fast recovery, and a way to restore to a specific point in time instead of just “whatever the last dump was.” MariaDB’s own docs distinguish logical backups from physical backups, recommend mariadb-backup for production style physical backups, and explain that point in time recovery uses a restored backup plus binary logs. The binary log records database changes, which is exactly what you want if your goal is to avoid losing recent data.</p>\n<p>Your real options</p>\n<p>Option 1. mariadb-dump</p>\n<p>This makes SQL dump files.</p>\n<p>Good</p>\n<p>Simple</p>\n<p>Easy to restore a whole database or single tables</p>\n<p>Nice extra safety copy</p>\n<p>Bad</p>\n<p>Slower on big databases</p>\n<p>Restore takes longer</p>\n<p>Not ideal as your only “fresh all day” backup method</p>\n<p>MariaDB documents mariadb-dump as the logical backup tool, and logical backups recreate data using SQL statements like CREATE TABLE and INSERT.</p>\n<p>Option 2. 
mariadb-backup</p>\n<p>This is the one I would treat as your main production backup.</p>\n<p>Good</p>\n<p>Faster and more serious than plain SQL dumps</p>\n<p>Designed for full and incremental backups</p>\n<p>Better for larger active databases</p>\n<p>Supports restore workflows needed for point in time recovery</p>\n<p>Bad</p>\n<p>Restore procedure is a little more involved</p>\n<p>You need to prepare and test restores properly</p>\n<p>MariaDB’s docs say mariadb-backup supports full and incremental backups, and that backups must be prepared before restore.</p>\n<p>Option 3. Binary logs for point in time recovery</p>\n<p>This is what closes the “we just lost the last few hours” gap.</p>\n<p>Good</p>\n<p>Lets you recover right up to a specific time</p>\n<p>Best protection against losing recent inserts, edits, deletes</p>\n<p>Essential if you want recovery between scheduled backups</p>\n<p>Bad</p>\n<p>Not a complete backup by itself</p>\n<p>Must be combined with a full or incremental backup</p>\n<p>Needs storage management so logs do not grow forever</p>\n<p>MariaDB states that point in time recovery restores a base backup first, then replays binary logs with mysqlbinlog, and that the binary log contains records of data changes.</p>\n<p>Option 4. Replication to another MariaDB server</p>\n<p>This is more about high availability than backup.</p>\n<p>Good</p>\n<p>Extra live copy</p>\n<p>Fast failover possibilities</p>\n<p>Bad</p>\n<p>If bad data or accidental deletes replicate, you can lose data on both</p>\n<p>Still does not replace backups</p>\n<p>I would only add this later if uptime becomes a bigger deal.</p>\n<p>Option 5. 
Filesystem snapshots</p>\n<p>This can be excellent if the server storage is on ZFS, LVM snapshots, or similar.</p>\n<p>Good</p>\n<p>Fast</p>\n<p>Great for frequent local rollback points</p>\n<p>Bad</p>\n<p>Depends on how your storage is built</p>\n<p>Still should not be your only backup</p>\n<p>Snapshot alone is not the same as off machine backup</p>\n<p>What I recommend for your server</p>\n<p>For ellas_alterations, I would use this exact strategy:</p>\n<p>Best setup</p>\n<p>Layer 1</p>\n<p>Turn on binary logging and keep enough logs for several days.</p>\n<p>Layer 2</p>\n<p>Run one nightly full backup with mariadb-backup.</p>\n<p>Layer 3</p>\n<p>Run incremental backups every 4 hours during the day.</p>\n<p>Layer 4</p>\n<p>Run a logical SQL dump twice a day as a second safety net.</p>\n<p>Layer 5</p>\n<p>Immediately copy backups to a second location, ideally:</p>\n<p>another drive like /mnt/drive1 or /mnt/drive2</p>\n<p>and even better, another machine on your network</p>\n<p>Layer 6</p>\n<p>Do a test restore regularly, even if only to a temp database or test server.</p>\n<p>That combo is the sweet spot between safe, practical, and not overly crazy.</p>\n<p>Why this is the best balance</p>\n<p>Here is the logic:</p>\n<p>mariadb-dump alone is easy but too slow and too coarse if you want frequent fresh recovery points</p>\n<p>mariadb-backup gives you strong base and incremental backups</p>\n<p>binary logs fill in the gaps between those backups</p>\n<p>off server copies protect you if the whole machine dies</p>\n<p>restore tests make sure your backups are not fake confidence</p>\n<p>That is the difference between “we have backup files” and “we can actually recover.”</p>\n<p>A practical schedule</p>\n<p>Here is a schedule I would use for you:</p>\n<p>Every night at 1:00 AM</p>\n<p>full mariadb-backup</p>\n<p>Every 4 hours</p>\n<p>incremental mariadb-backup</p>\n<p>Every 12 hours</p>\n<p>mariadb-dump of ellas_alterations</p>\n<p>Every hour</p>\n<p>sync backup folder to 
another disk or server</p>\n<p>Binary logs</p>\n<p>always on</p>\n<p>keep at least 7 to 14 days, depending on space</p>\n<p>If your business data changes constantly and you are nervous about same day loss, make the incrementals every 2 hours instead of every 4.</p>\n<p>What “not lose any data” really means</p>\n<p>I want to be blunt here.</p>\n<p>You can get very close to no data loss, but true zero loss is hard unless you move into high availability, synchronous replication, battery backed storage, and more advanced infrastructure.</p>\n<p>For your setup, the closest practical answer is:</p>\n<p>full backup nightly</p>\n<p>incrementals multiple times a day</p>\n<p>binary logs always on</p>\n<p>off machine copies</p>\n<p>That can reduce your recovery point objective from many hours down to minutes, depending on how current your binlogs and copied backup files are. MariaDB’s point in time recovery model is specifically built for this kind of restoration.</p>\n<p>The simplest strong version</p>\n<p>If you want the least complicated version that is still solid, do this:</p>\n<p>Enable binary logs</p>\n<p>Nightly mariadb-backup full backup</p>\n<p>Midday mariadb-dump</p>\n<p>Rsync everything to another drive and another server</p>\n<p>Test restoring once</p>\n<p>That is already way better than what most small businesses do.</p>\n<p>What I would do on your machine</p>\n<p>Because you already use Linux servers and separate storage, I would set it up like this conceptually:</p>\n<p>MariaDB live data stays where it is now</p>\n<p>backups go to something like /mnt/drive1/mariadb_backups/ellas_alterations/</p>\n<p>structure:</p>\n<p>full/</p>\n<p>incremental/</p>\n<p>sql/</p>\n<p>binlog_archive/</p>\n<p>logs/</p>\n<p>Then automate with cron or systemd timers.</p>\n<p>Step by step reasoning</p>\n<p>Your risk is not just server failure. 
It is also accidental delete, bad update, corruption, and “oops we changed the wrong record.”</p>\n<p>A single daily dump is too wide a gap.</p>\n<p>Physical backups recover faster than pure SQL dumps.</p>\n<p>Incrementals reduce backup load while keeping more recent restore points.</p>\n<p>Binary logs are the only practical way to replay changes after the last backup and recover to a chosen time.</p>\n<p>Off machine copies protect you from total server loss.</p>\n<p>Restore testing proves the backup is real.</p>\n<p>Other angles you might not have thought about</p>\n<p>Keep backups on a different machine</p>\n<p>If the server dies, local backups on the same disk can die with it.</p>\n<p>Keep one logical dump even if you use physical backups</p>\n<p>SQL dumps are slower, but they are convenient when you need just one database or one table.</p>\n<p>Watch backup size and retention</p>\n<p>You do not want binlogs filling the drive and choking the server.</p>\n<p>Back up credentials and config too</p>\n<p>Not just the database files.</p>\n<p>Also save:</p>\n<p>MariaDB config</p>\n<p>backup scripts</p>\n<p>cron jobs</p>\n<p>backup user credentials stored securely</p>\n<p>Test a restore to a second database name</p>\n<p>A backup you have never restored is just a theory.</p>\n<p>My recommendation in one sentence</p>\n<p>Use mariadb-backup full plus incrementals as your main backup system, keep binary logs enabled for point in time recovery, add a daily mariadb-dump as a second format, and sync all backups off the server automatically.</p>\n<p>Action plan you can apply now</p>\n<p>Confirm binary logging is enabled on 192.168.7.202</p>\n<p>Install and use mariadb-backup if it is not already installed</p>\n<p>Create a backup folder on a separate mounted drive</p>\n<p>Set nightly full backup</p>\n<p>Set 2 hour or 4 hour incremental backups</p>\n<p>Add 1 or 2 daily SQL dumps</p>\n<p>Sync backup files to another machine or disk</p>\n<p>Do one full restore test this 
week</p>\n<p>I can turn this into an exact setup for your box next, including:</p>\n<p>the MariaDB config changes</p>\n<p>backup user creation</p>\n<p>folder layout</p>\n<p>cron jobs</p>\n<p>full backup script</p>\n<p>incremental backup script</p>\n<p>SQL dump script</p>\n<p>restore instructions for ellas_alterations</p>","file_name":"SESSION_LOG_2026-04-05.md","stored_name":"20260405-133813-0749951d.md","mime_type":"application/octet-stream","file_size_bytes":"15661","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-133813-0749951d.md","is_deleted":"0","created_at":"2026-04-05 09:38:13","updated_at":"2026-04-05 16:00:10"}
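The saved document above ends by promising restore instructions for ellas_alterations. The point-in-time recovery path it describes (prepare the full backup, apply an incremental on top, copy the result back, then replay binary logs up to a chosen moment) can be sketched as below. Every path, the binlog file name, and the cutoff time are placeholders, not values from the log, and the script only prints its plan unless DRY_RUN=0.

```shell
#!/usr/bin/env bash
# Sketch of point-in-time recovery: restored physical backup + binlog replay.
# FULL, INC, ROOT, the binlog name, and CUTOFF are placeholder values.
set -u

ROOT="/mnt/drive1/mariadb_backups/ellas_alterations"
FULL="$ROOT/full/20260405-010000"
INC="$ROOT/incremental/20260405-120000"
CUTOFF="2026-04-05 15:59:00"        # stop just before the bad statement
DRY_RUN="${DRY_RUN:-1}"             # default: print the plan only

run() {
  if [ "$DRY_RUN" = "1" ]; then printf '+ %s\n' "$*"; else "$@"; fi
}

restore_to_point_in_time() {
  # 1. Prepare the full backup, then apply the incremental on top of it.
  run mariadb-backup --prepare --target-dir="$FULL"
  run mariadb-backup --prepare --target-dir="$FULL" --incremental-dir="$INC"

  # 2. Copy the prepared backup back (assumes the old datadir was moved
  #    aside first, since --copy-back needs an empty datadir).
  run systemctl stop mariadb
  run mariadb-backup --copy-back --target-dir="$FULL"
  run chown -R mysql:mysql /var/lib/mysql
  run systemctl start mariadb

  # 3. Replay archived binlogs up to the cutoff to recover later changes.
  run sh -c "mysqlbinlog --stop-datetime='$CUTOFF' $ROOT/binlog_archive/binlog.000042 | mariadb"
}

restore_to_point_in_time
```

The dry-run output doubles as a written runbook for the restore test the log recommends doing once per week.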
schedule · create
2026-04-05 15:52:57 · admin@ellasalterations.com
appointment_workflow #0
Context
{"workflow_status":"no_show","visit_id":129002,"order_id":103755}
Before
[]
Changed Data
{"workflow_id":"3","source":"visit","source_key":"129002","appointment_date":"2026-04-05","customer_id":"488983","visit_id":"129002","order_id":null,"workflow_status":"no_show","payment_state":null,"payment_method":null,"payment_amount":"0.00","paid_in_advance":"0","report_required":"0","report_attached":"0","report_checked_at":"2026-04-05 15:52:57","provider_notes_text":null,"operator_notes":null,"no_show_email_key":"none","created_at":"2026-04-05 15:52:57","updated_at":"2026-04-05 15:52:57"}
schedule · save
2026-04-05 15:33:07 · admin@ellasalterations.com
appointment_workflow #0
Context
{"workflow_status":"no_show","visit_id":129003,"order_id":103756}
Before
{"workflow_id":"2","source":"visit","source_key":"129003","appointment_date":"2026-04-05","customer_id":"3535","visit_id":"129003","order_id":null,"workflow_status":"no_show","payment_state":null,"payment_method":null,"payment_amount":"0.00","paid_in_advance":"0","report_required":"0","report_attached":"0","report_checked_at":"2026-04-05 15:14:46","provider_notes_text":null,"operator_notes":null,"no_show_email_key":"none","created_at":"2026-04-05 14:29:44","updated_at":"2026-04-05 15:14:46"}
Changed Data
{"workflow_id":"2","source":"visit","source_key":"129003","appointment_date":"2026-04-05","customer_id":"3535","visit_id":"129003","order_id":null,"workflow_status":"no_show","payment_state":null,"payment_method":null,"payment_amount":"0.00","paid_in_advance":"0","report_required":"0","report_attached":"0","report_checked_at":"2026-04-05 15:33:07","provider_notes_text":null,"operator_notes":null,"no_show_email_key":"none","created_at":"2026-04-05 14:29:44","updated_at":"2026-04-05 15:33:07"}
schedule · save
2026-04-05 15:14:46 · admin@ellasalterations.com
appointment_workflow #0
Context
{"workflow_status":"no_show","visit_id":129003,"order_id":103756}
Before
{"workflow_id":"2","source":"visit","source_key":"129003","appointment_date":"2026-04-05","customer_id":"3535","visit_id":"129003","order_id":null,"workflow_status":"no_show","payment_state":null,"payment_method":null,"payment_amount":"0.00","paid_in_advance":"0","report_required":"0","report_attached":"0","report_checked_at":"2026-04-05 14:29:44","provider_notes_text":null,"operator_notes":null,"no_show_email_key":"none","created_at":"2026-04-05 14:29:44","updated_at":"2026-04-05 14:29:44"}
Changed Data
{"workflow_id":"2","source":"visit","source_key":"129003","appointment_date":"2026-04-05","customer_id":"3535","visit_id":"129003","order_id":null,"workflow_status":"no_show","payment_state":null,"payment_method":null,"payment_amount":"0.00","paid_in_advance":"0","report_required":"0","report_attached":"0","report_checked_at":"2026-04-05 15:14:46","provider_notes_text":null,"operator_notes":null,"no_show_email_key":"none","created_at":"2026-04-05 14:29:44","updated_at":"2026-04-05 15:14:46"}
schedule · create
2026-04-05 14:29:44 · cli
appointment_workflow #0
Context
{"workflow_status":"no_show","visit_id":129003,"order_id":103756}
Before
[]
Changed Data
{"workflow_id":"2","source":"visit","source_key":"129003","appointment_date":"2026-04-05","customer_id":"3535","visit_id":"129003","order_id":null,"workflow_status":"no_show","payment_state":null,"payment_method":null,"payment_amount":"0.00","paid_in_advance":"0","report_required":"0","report_attached":"0","report_checked_at":"2026-04-05 14:29:44","provider_notes_text":null,"operator_notes":null,"no_show_email_key":"none","created_at":"2026-04-05 14:29:44","updated_at":"2026-04-05 14:29:44"}
documents · upload
2026-04-05 09:38:13 · anonymous
backend_document #6
Context
{"file_name":"SESSION_LOG_2026-04-05.md","mime_type":"application/octet-stream"}
Before
[]
Changed Data
{"backend_document_id":"6","document_type":"upload","title":"Claude 04-05-2026 Changes","slug":"claude-04-05-2026-changes","summary_text":"Claude 04-05-2026 Changes","content_markdown":null,"content_html":null,"file_name":"SESSION_LOG_2026-04-05.md","stored_name":"20260405-133813-0749951d.md","mime_type":"application/octet-stream","file_size_bytes":"15661","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-133813-0749951d.md","is_deleted":"0","created_at":"2026-04-05 09:38:13","updated_at":"2026-04-05 09:38:13"}
documents · upload
2026-04-05 09:37:29 · anonymous
backend_document #5
Context
{"file_name":"MORNING_CHANGES_2026-04-05.md","mime_type":"application/octet-stream"}
Before
[]
Changed Data
{"backend_document_id":"5","document_type":"upload","title":"Codex 04-05-2026 Changes","slug":"codex-04-05-2026-changes","summary_text":"Codex 04-05-2026 Changes","content_markdown":null,"content_html":null,"file_name":"MORNING_CHANGES_2026-04-05.md","stored_name":"20260405-133729-718b7452.md","mime_type":"application/octet-stream","file_size_bytes":"7821","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-133729-718b7452.md","is_deleted":"0","created_at":"2026-04-05 09:37:29","updated_at":"2026-04-05 09:37:29"}
documents · upload
2026-04-05 08:29:33 · anonymous
backend_document #4
Context
{"file_name":"PROJECT_STATUS.md","mime_type":"application/octet-stream"}
Before
[]
Changed Data
{"backend_document_id":"4","document_type":"upload","title":"04-04-2026 All Claude Documentation","slug":"04-04-2026-all-claude-documentation","summary_text":"04-04-2026 All Claude Documentation","content_markdown":null,"content_html":null,"file_name":"PROJECT_STATUS.md","stored_name":"20260405-122933-c684516a.md","mime_type":"application/octet-stream","file_size_bytes":"61168","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-122933-c684516a.md","is_deleted":"0","created_at":"2026-04-05 08:29:33","updated_at":"2026-04-05 08:29:33"}
documents · upload
2026-04-05 08:28:38 · anonymous
backend_document #3
Context
{"file_name":"IMPLEMENTATION_STATUS_AND_TEST_PLAN_2026-04-04.md","mime_type":"application/octet-stream"}
Before
[]
Changed Data
{"backend_document_id":"3","document_type":"upload","title":"04-04-2026 All Codex Documentation","slug":"04-04-2026-all-codex-documentation","summary_text":"04-04-2026 All Codex Documentation","content_markdown":null,"content_html":null,"file_name":"IMPLEMENTATION_STATUS_AND_TEST_PLAN_2026-04-04.md","stored_name":"20260405-122838-08b37e3f.md","mime_type":"application/octet-stream","file_size_bytes":"28700","storage_path":"/mnt/drive1/customerdb/backend/documents_storage/20260405-122838-08b37e3f.md","is_deleted":"0","created_at":"2026-04-05 08:28:38","updated_at":"2026-04-05 08:28:38"}
documents · save
2026-04-03 08:04:41 · anonymous
backend_document #2
Context
{"document_type":"markdown"}
Before
{"backend_document_id":"2","document_type":"markdown","title":"Test Note","slug":"test-note","summary_text":"This is test note summary","content_markdown":"On the settings side, settings.html now has a slideshow image manager for customerdb/webui/slideshow/ImageFiles, and I added slideshow_list.php, slideshow_upload.php, and slideshow_delete.php. I also fixed the webui settings endpoints so theme_font_size, theme_font_weight, theme_heading_weight, and nightly_report_to_email are saved and applied in settings_save.php, settings_get.php, and settings_css.php.\r\n\r\nWhat to test now:\r\n\r\nOpen https://ella.floridaalterations.com/backend/ and confirm the new Open Documents and View Logs links are there.\r\nOpen https://ella.floridaalterations.com/backend/documents.php, create a markdown doc, edit it, upload a file, and delete one document.","content_html":"<p>On the settings side, settings.html now has a slideshow image manager for customerdb/webui/slideshow/ImageFiles, and I added slideshow_list.php, slideshow_upload.php, and slideshow_delete.php. I also fixed the webui settings endpoints so theme_font_size, theme_font_weight, theme_heading_weight, and nightly_report_to_email are saved and applied in settings_save.php, settings_get.php, and settings_css.php.</p>\n<p>What to test now:</p>\n<p>Open https://ella.floridaalterations.com/backend/ and confirm the new Open Documents and View Logs links are there.</p>\n<p>Open https://ella.floridaalterations.com/backend/documents.php, create a markdown doc, edit it, upload a file, and delete one document.</p>","file_name":null,"stored_name":null,"mime_type":null,"file_size_bytes":null,"storage_path":null,"is_deleted":"0","created_at":"2026-04-02 15:00:01","updated_at":"2026-04-02 15:00:01"}
Changed Data
{"backend_document_id":"2","document_type":"markdown","title":"Test Note","slug":"test-note","summary_text":"This is test note summary","content_markdown":"# test\r\n\r\n## second line\r\n\r\n---\r\n\r\n- list\r\n- sls\r\n- sdfas'\r\n- asdasdf","content_html":"<h1>test</h1>\n<h2>second line</h2>\n<p>---</p>\n<ul>\n<li>list</li>\n<li>sls</li>\n<li>sdfas&#039;</li>\n<li>asdasdf</li>\n</ul>","file_name":null,"stored_name":null,"mime_type":null,"file_size_bytes":null,"storage_path":null,"is_deleted":"0","created_at":"2026-04-02 15:00:01","updated_at":"2026-04-03 08:04:41"}
documents · create
2026-04-02 15:00:01 · anonymous
backend_document #2
Context
{"document_type":"markdown"}
Before
[]
Changed Data
{"backend_document_id":"2","document_type":"markdown","title":"Test Note","slug":"test-note","summary_text":"This is test note summary","content_markdown":"On the settings side, settings.html now has a slideshow image manager for customerdb/webui/slideshow/ImageFiles, and I added slideshow_list.php, slideshow_upload.php, and slideshow_delete.php. I also fixed the webui settings endpoints so theme_font_size, theme_font_weight, theme_heading_weight, and nightly_report_to_email are saved and applied in settings_save.php, settings_get.php, and settings_css.php.\r\n\r\nWhat to test now:\r\n\r\nOpen https://ella.floridaalterations.com/backend/ and confirm the new Open Documents and View Logs links are there.\r\nOpen https://ella.floridaalterations.com/backend/documents.php, create a markdown doc, edit it, upload a file, and delete one document.","content_html":"<p>On the settings side, settings.html now has a slideshow image manager for customerdb/webui/slideshow/ImageFiles, and I added slideshow_list.php, slideshow_upload.php, and slideshow_delete.php. I also fixed the webui settings endpoints so theme_font_size, theme_font_weight, theme_heading_weight, and nightly_report_to_email are saved and applied in settings_save.php, settings_get.php, and settings_css.php.</p>\n<p>What to test now:</p>\n<p>Open https://ella.floridaalterations.com/backend/ and confirm the new Open Documents and View Logs links are there.</p>\n<p>Open https://ella.floridaalterations.com/backend/documents.php, create a markdown doc, edit it, upload a file, and delete one document.</p>","file_name":null,"stored_name":null,"mime_type":null,"file_size_bytes":null,"storage_path":null,"is_deleted":"0","created_at":"2026-04-02 15:00:01","updated_at":"2026-04-02 15:00:01"}
documents · delete
2026-04-02 12:40:59 · cli
backend_document #1
Context
[]
Before
{"backend_document_id":"1","document_type":"markdown","title":"Backend Smoke Test","slug":"backend-smoke-test","summary_text":null,"content_markdown":"# Test\n\n- one\n- two","content_html":"<h1>Test</h1>\n<ul>\n<li>one</li>\n<li>two</li>\n</ul>","file_name":null,"stored_name":null,"mime_type":null,"file_size_bytes":null,"storage_path":null,"is_deleted":"0","created_at":"2026-04-02 12:40:21","updated_at":"2026-04-02 12:40:21"}
Changed Data
{"deleted":true}
documents · create
2026-04-02 12:40:21 · anonymous
backend_document #1
Context
{"document_type":"markdown"}
Before
[]
Changed Data
{"backend_document_id":"1","document_type":"markdown","title":"Backend Smoke Test","slug":"backend-smoke-test","summary_text":null,"content_markdown":"# Test\n\n- one\n- two","content_html":"<h1>Test</h1>\n<ul>\n<li>one</li>\n<li>two</li>\n</ul>","file_name":null,"stored_name":null,"mime_type":null,"file_size_bytes":null,"storage_path":null,"is_deleted":"0","created_at":"2026-04-02 12:40:21","updated_at":"2026-04-02 12:40:21"}