Compare commits


4 Commits

Author SHA1 Message Date
e71c83538f Add README documentation
Document project overview, features, database schema, configuration,
commands, and getting started guide.
2026-04-02 08:21:55 +02:00
7a172c6fdb Simplify database schema: remove card_type, card_id, add card_number
Refactor the database schema to better model the data relationships:

Schema changes:
- Removed cards.card_type (redundant, identical to card_number)
- Removed transactions.card_id (unnecessary indirection)
- Added transactions.card_number (stores card number for all transactions)
- Made cards.customer_id NOT NULL (every card must belong to a customer)
- Made transactions.customer_id nullable (NULL for anonymized transactions)

Import logic changes:
- Only create cards for known customers (transactions with customer_number)
- Store card_number for ALL transactions (including anonymized)
- Skip cards/customer creation for anonymized transactions

Additional changes:
- Add 'db reset' command to drop and recreate database
- Update migration file with new schema

This simplifies queries and better reflects the data model:
- Cards table: authoritative mapping of card_number -> customer_id
- Transactions table: stores all raw data including anonymized cards
- Customer relationship via JOIN on card_number for known customers
2026-04-02 08:15:05 +02:00
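The card_number -> customer mapping this commit describes can be modeled in a few lines of plain Rust (an illustrative sketch with made-up names, not the project's actual types):

```rust
use std::collections::HashMap;

// Stand-in for the cards table: the authoritative
// card_number -> customer_id mapping.
type Cards = HashMap<String, u32>;

// Resolve the customer for a transaction's card number.
// Known cards yield Some(customer_id); anonymized cards yield None,
// mirroring the now-nullable transactions.customer_id column.
fn resolve_customer(cards: &Cards, card_number: &str) -> Option<u32> {
    cards.get(card_number).copied()
}
```

Every transaction keeps its card_number either way; only the customer link is optional.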
cd46368f79 Add multi-environment support for database configuration
Introduces separate databases and config files for the dev, test, and prod
environments. The application now defaults to production; an --env flag
selects an alternative environment.

Changes:
- Update config.rs to support env-based loading (config.toml -> config.<env>.toml -> config.example.toml)
- Add Env enum (Prod, Dev, Test) with database name mapping
- Add --env flag to CLI commands (defaults to prod)
- Add 'db setup' command to create database and schema
- Split migrations into env-specific database creation and shared schema
- Update .gitignore to track config.example.toml but ignore config.toml and config.<env>.toml files
- Update config.example.toml as a template with placeholder values
- Delete 001_initial_schema.sql, replaced by 002_schema.sql + env-specific files

Config loading order:
  1. config.toml (local override)
  2. config.<env>.toml (environment-specific)
  3. config.example.toml (fallback)

Database names:
  - prod: rusty_petroleum
  - dev:  rusty_petroleum_dev
  - test: rusty_petroleum_test

Usage:
  cargo run -- db setup --env dev       # Setup dev database
  cargo run -- import data.csv --env dev # Import to dev
  cargo run -- db setup                # Setup prod (default)
  cargo run -- import data.csv         # Import to prod (default)
2026-04-02 07:09:06 +02:00
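The environment-to-database mapping above is compact enough to sketch; this mirrors the Env enum the commit describes, though details may differ from the committed src/config.rs:

```rust
#[derive(Debug, PartialEq)]
enum Env { Prod, Dev, Test }

impl Env {
    // Each environment gets its own database, so dev/test imports
    // can never touch production data.
    fn database_name(&self) -> &'static str {
        match self {
            Env::Prod => "rusty_petroleum",
            Env::Dev => "rusty_petroleum_dev",
            Env::Test => "rusty_petroleum_test",
        }
    }
}

impl std::str::FromStr for Env {
    type Err = String;
    // Accepts both short and long spellings, case-insensitively.
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_lowercase().as_str() {
            "prod" | "production" => Ok(Env::Prod),
            "dev" | "development" => Ok(Env::Dev),
            "test" | "testing" => Ok(Env::Test),
            other => Err(format!("Unknown environment: {}", other)),
        }
    }
}
```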
9daa186ff6 Add MariaDB database support for storing transaction data
Introduces a new database layer to persist CSV transaction data in MariaDB,
enabling both invoicing and sales reporting queries. This replaces the
previous file-to-file-only processing.

Changes:
- Add sqlx, tokio, toml, anyhow, bigdecimal dependencies to Cargo.toml
- Create config module for TOML-based configuration (database credentials)
- Create db module with connection pool, models, and repository
- Create commands module with 'import' subcommand for CSV ingestion
- Refactor main.rs to use subcommand architecture (import/generate)
- Add migration SQL file for manual database schema creation

Schema (3 tables):
- customers: customer_number, card_report_group (1=fleet, 3/4=retail)
- cards: card_number, card_type, customer_id (nullable for anonymous)
- transactions: full transaction data with FK to cards/customers

Usage:
  cargo run -- import <csv-file>   # Import to database
  cargo run -- generate <csv> <dir>  # Generate HTML invoices (unchanged)

Configuration:
  cp config.example.toml config.toml  # Edit with database credentials
  mysql < migrations/001_initial_schema.sql  # Create database first
2026-04-02 06:33:38 +02:00
18 changed files with 1209 additions and 93 deletions

.gitignore

@@ -5,3 +5,6 @@ output/
 *.swp
 *.lock
 /input
+config.toml
+config.dev.toml
+config.test.toml

Cargo.toml

@@ -5,6 +5,11 @@ edition = "2021"
 [dependencies]
 askama = "0.15.5"
-chrono = "0.4.44"
+chrono = { version = "0.4.44", features = ["serde"] }
 csv = "1.4.0"
-serde = "1.0.228"
+serde = { version = "1.0.228", features = ["derive"] }
+sqlx = { version = "0.8", features = ["runtime-tokio", "mysql", "chrono", "bigdecimal"] }
+tokio = { version = "1", features = ["full"] }
+toml = "0.8"
+anyhow = "1"
+bigdecimal = { version = "0.4", features = ["serde"] }

README.md

@@ -1,2 +1,186 @@
# rusty-petroleum
A petroleum transaction invoice generator with MariaDB backend.
## Overview
This project processes petroleum/fuel station transaction data from CSV files and generates customer invoices. It stores transaction data in MariaDB for both invoicing and sales reporting.
## Features
- **CSV Import**: Import transaction data from fuel station CSV files into MariaDB
- **Invoice Generation**: Generate HTML invoices from CSV data (file-to-file mode)
- **Multi-Environment**: Separate databases for development, testing, and production
- **Sales Reporting**: Query transactions by customer, product, date range
## Project Structure
```
rusty-petroleum/
├── Cargo.toml # Rust dependencies
├── config.example.toml # Config template
├── migrations/ # SQL schema files
│ ├── 001_dev.sql
│ ├── 001_test.sql
│ ├── 001_prod.sql
│ └── 002_schema.sql
├── input/ # CSV input files
├── output/ # Generated invoices
├── src/
│ ├── main.rs # CLI entry point
│ ├── config.rs # Configuration loading
│ ├── db/ # Database layer
│ │ ├── connection.rs
│ │ ├── models.rs
│ │ └── repository.rs
│ ├── commands/ # CLI commands
│ │ ├── db.rs # db setup/reset
│ │ └── import.rs # CSV import
│ └── invoice_generator.rs
└── templates/ # HTML invoice templates
```
## Database Schema
### customers
| Column | Type | Description |
|--------|------|-------------|
| id | INT | Primary key |
| customer_number | VARCHAR | Unique customer identifier |
| card_report_group | TINYINT | Customer classification (1=fleet, 3/4=retail) |
### cards
| Column | Type | Description |
|--------|------|-------------|
| id | INT | Primary key |
| card_number | VARCHAR | Unique card identifier |
| customer_id | INT | FK to customers |
### transactions
| Column | Type | Description |
|--------|------|-------------|
| id | BIGINT | Primary key |
| transaction_date | DATETIME | Transaction timestamp |
| batch_number | VARCHAR | Batch identifier |
| amount | DECIMAL | Transaction amount |
| volume | DECIMAL | Volume in liters |
| price | DECIMAL | Price per liter |
| quality_code | INT | Product code |
| quality_name | VARCHAR | Product name (95 Oktan, Diesel) |
| card_number | VARCHAR | Card used (including anonymized) |
| station | VARCHAR | Station ID |
| terminal | VARCHAR | Terminal ID |
| pump | VARCHAR | Pump number |
| receipt | VARCHAR | Receipt number |
| control_number | VARCHAR | Control/verification number |
| customer_id | INT | FK to customers (NULL for anonymized) |
## Configuration
Copy the example config and edit with your database credentials:
```bash
cp config.example.toml config.dev.toml # or config.test.toml
```
Edit `config.dev.toml`:
```toml
[database]
host = "localhost"
port = 3306
user = "your_user"
password = "your_password"
name = "rusty_petroleum_dev"
```
### Environment Config Loading
Config files are loaded in order:
1. `config.toml` (local override, gitignored)
2. `config.<env>.toml` (environment-specific, gitignored)
3. `config.example.toml` (fallback, tracked)
## Commands
```bash
# Database management
cargo run -- db setup --env <dev|test|prod> # Create database and schema
cargo run -- db reset --env <dev|test|prod> # Drop and recreate database
# Import data
cargo run -- import <csv-file> --env <dev|test|prod> # Import to database (default: prod)
# Generate invoices (file-to-file, no database)
cargo run -- generate <csv-file> <output-dir>
```
### Usage Examples
```bash
# Setup development database
cargo run -- db setup --env dev
# Import transactions to dev database
cargo run -- import input/409.csv --env dev
# Reset development database
cargo run -- db reset --env dev
# Generate HTML invoices from CSV
cargo run -- generate input/409.csv output/
```
## Current Status
### Implemented
- [x] Database schema for transactions, customers, cards
- [x] CSV import to MariaDB
- [x] Multi-environment support (dev/test/prod)
- [x] Configuration via TOML files
- [x] Invoice generation (HTML output)
- [x] Database setup/reset commands
### TODO
- [ ] Sales reporting queries (dashboard/API)
- [ ] Customer invoice retrieval from database
- [ ] Batch import across multiple CSV files
- [ ] Unit tests
- [ ] CI/CD pipeline
## Technology Stack
- **Language**: Rust (Edition 2021)
- **Database**: MariaDB
- **ORM**: sqlx (async MySQL)
- **Templating**: Askama (HTML templates)
- **Config**: TOML
## Getting Started
1. Install Rust (if not already installed)
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```
2. Create database user and grant permissions in MariaDB
```sql
CREATE USER 'your_user'@'%' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON rusty_petroleum_dev.* TO 'your_user'@'%';
CREATE DATABASE rusty_petroleum_dev;
```
3. Setup configuration
```bash
cp config.example.toml config.dev.toml
# Edit config.dev.toml with your credentials
```
4. Setup database and import data
```bash
cargo run -- db setup --env dev
cargo run -- import input/409.csv --env dev
```
## License
See LICENSE file.

config.example.toml (new file)

@@ -0,0 +1,6 @@
[database]
host = "localhost"
port = 3306
user = ""
password = ""
name = "rusty_petroleum"
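For illustration, a sketch of how these fields typically become a MySQL connection URL (the project's real logic is DatabaseConfig::connection_url in src/config.rs; this standalone copy assumes the same empty-password rule):

```rust
// Build a mysql:// URL from config values; the password segment is
// omitted entirely when the password is empty.
fn connection_url(host: &str, port: u16, user: &str, password: &str, name: &str) -> String {
    if password.is_empty() {
        format!("mysql://{}@{}:{}/{}", user, host, port, name)
    } else {
        format!("mysql://{}:{}@{}:{}/{}", user, password, host, port, name)
    }
}
```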

migrations/001_dev.sql (new file)

@@ -0,0 +1,2 @@
-- Create development database
CREATE DATABASE IF NOT EXISTS rusty_petroleum_dev;

migrations/001_prod.sql (new file)

@@ -0,0 +1,2 @@
-- Create production database
CREATE DATABASE IF NOT EXISTS rusty_petroleum;

migrations/001_test.sql (new file)

@@ -0,0 +1,2 @@
-- Create test database
CREATE DATABASE IF NOT EXISTS rusty_petroleum_test;

migrations/002_schema.sql (new file)

@@ -0,0 +1,46 @@
-- Schema for rusty_petroleum
-- Run after creating the database

CREATE TABLE IF NOT EXISTS customers (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    customer_number VARCHAR(50) NOT NULL UNIQUE,
    card_report_group TINYINT UNSIGNED NOT NULL DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_customer_number (customer_number)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

CREATE TABLE IF NOT EXISTS cards (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    card_number VARCHAR(50) NOT NULL UNIQUE,
    customer_id INT UNSIGNED NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    INDEX idx_card_number (card_number),
    INDEX idx_customer_id (customer_id),
    FOREIGN KEY (customer_id) REFERENCES customers(id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

CREATE TABLE IF NOT EXISTS transactions (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    transaction_date DATETIME NOT NULL,
    batch_number VARCHAR(20) NOT NULL,
    amount DECIMAL(10,2) NOT NULL,
    volume DECIMAL(10,3) NOT NULL,
    price DECIMAL(8,4) NOT NULL,
    quality_code INT NOT NULL,
    quality_name VARCHAR(50) NOT NULL,
    card_number VARCHAR(50) NOT NULL,
    station VARCHAR(20) NOT NULL,
    terminal VARCHAR(10) NOT NULL,
    pump VARCHAR(10) NOT NULL,
    receipt VARCHAR(20) NOT NULL,
    control_number VARCHAR(20),
    customer_id INT UNSIGNED NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_transaction_date (transaction_date),
    INDEX idx_batch_number (batch_number),
    INDEX idx_customer_id (customer_id),
    INDEX idx_card_number (card_number),
    INDEX idx_station (station)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

src/commands/db.rs (new file)

@@ -0,0 +1,126 @@
use crate::config::Config;
use crate::db::Repository;
use sqlx::mysql::MySqlPoolOptions;
pub async fn run_db_setup(repo: &Repository, config: &Config) -> anyhow::Result<()> {
let env = &config.env;
println!("Setting up database for environment: {}", env.as_str());
println!("Database: {}", env.database_name());
let database_url = &config.database.connection_url();
let base_url = database_url.trim_end_matches(env.database_name());
let setup_pool = MySqlPoolOptions::new()
.max_connections(1)
.connect(base_url)
.await?;
println!("Creating database if not exists...");
sqlx::query(&format!(
"CREATE DATABASE IF NOT EXISTS {}",
env.database_name()
))
.execute(&setup_pool)
.await?;
println!("Database '{}' ready", env.database_name());
drop(setup_pool);
println!("Creating tables...");
sqlx::query(
r#"
CREATE TABLE IF NOT EXISTS customers (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
customer_number VARCHAR(50) NOT NULL UNIQUE,
card_report_group TINYINT UNSIGNED NOT NULL DEFAULT 0,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_customer_number (customer_number)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
"#,
)
.execute(repo.pool())
.await?;
sqlx::query(
r#"
CREATE TABLE IF NOT EXISTS cards (
id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
card_number VARCHAR(50) NOT NULL UNIQUE,
customer_id INT UNSIGNED NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
INDEX idx_card_number (card_number),
INDEX idx_customer_id (customer_id),
FOREIGN KEY (customer_id) REFERENCES customers(id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
"#,
)
.execute(repo.pool())
.await?;
sqlx::query(
r#"
CREATE TABLE IF NOT EXISTS transactions (
id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
transaction_date DATETIME NOT NULL,
batch_number VARCHAR(20) NOT NULL,
amount DECIMAL(10,2) NOT NULL,
volume DECIMAL(10,3) NOT NULL,
price DECIMAL(8,4) NOT NULL,
quality_code INT NOT NULL,
quality_name VARCHAR(50) NOT NULL,
card_number VARCHAR(50) NOT NULL,
station VARCHAR(20) NOT NULL,
terminal VARCHAR(10) NOT NULL,
pump VARCHAR(10) NOT NULL,
receipt VARCHAR(20) NOT NULL,
control_number VARCHAR(20),
customer_id INT UNSIGNED NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
INDEX idx_transaction_date (transaction_date),
INDEX idx_batch_number (batch_number),
INDEX idx_customer_id (customer_id),
INDEX idx_card_number (card_number),
INDEX idx_station (station)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
"#,
)
.execute(repo.pool())
.await?;
println!("Tables created successfully.");
println!("Database setup complete!");
Ok(())
}
pub async fn run_db_reset(config: &Config) -> anyhow::Result<()> {
let env = &config.env;
println!("Resetting database for environment: {}", env.as_str());
println!("Database: {}", env.database_name());
let database_url = &config.database.connection_url();
let base_url = database_url.trim_end_matches(env.database_name());
let setup_pool = MySqlPoolOptions::new()
.max_connections(1)
.connect(base_url)
.await?;
println!("Dropping database if exists...");
sqlx::query(&format!("DROP DATABASE IF EXISTS {}", env.database_name()))
.execute(&setup_pool)
.await?;
println!("Creating database...");
sqlx::query(&format!("CREATE DATABASE {}", env.database_name()))
.execute(&setup_pool)
.await?;
drop(setup_pool);
println!("Database '{}' reset complete!", env.database_name());
Ok(())
}
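Both commands derive a server-level URL by trimming the database name off the configured connection URL. A minimal sketch of that string step (note trim_end_matches would also strip a repeated suffix, which is harmless for these names):

```rust
// Strip the database name from the end of the connection URL so we can
// connect at the server level (no schema selected) to CREATE/DROP it.
fn server_url(database_url: &str, db_name: &str) -> String {
    database_url.trim_end_matches(db_name).to_string()
}
```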

src/commands/import.rs (new file)

@@ -0,0 +1,168 @@
use crate::db::models::{NewCard, NewCustomer, NewTransaction};
use crate::db::Repository;
use chrono::NaiveDateTime;
use csv::ReaderBuilder;
use std::collections::HashMap;
use std::fs::File;
use std::path::Path;
pub async fn run_import(csv_path: &Path, repo: &Repository) -> anyhow::Result<()> {
println!("Reading CSV file: {:?}", csv_path);
let file = File::open(csv_path)?;
let mut rdr = ReaderBuilder::new()
.delimiter(b'\t')
.has_headers(true)
.flexible(true)
.from_reader(file);
let mut transactions = Vec::new();
let mut seen_customers: HashMap<String, u8> = HashMap::new();
let mut seen_cards: HashMap<String, String> = HashMap::new();
for result in rdr.records() {
let record = result?;
if let Some(tx) = parse_record(&record)? {
if !tx.customer_number.is_empty() {
let card_report_group: u8 = tx.card_report_group_number.parse().unwrap_or(0);
if !seen_customers.contains_key(&tx.customer_number) {
seen_customers.insert(tx.customer_number.clone(), card_report_group);
}
if !seen_cards.contains_key(&tx.card_number) {
seen_cards.insert(tx.card_number.clone(), tx.customer_number.clone());
}
}
transactions.push(tx);
}
}
println!("Found {} transactions", transactions.len());
println!("Unique customers: {}", seen_customers.len());
println!("Unique known cards: {}", seen_cards.len());
println!("\nImporting customers...");
let mut customer_ids: HashMap<String, u32> = HashMap::new();
for (customer_number, card_report_group) in &seen_customers {
let new_customer = NewCustomer {
customer_number: customer_number.clone(),
card_report_group: *card_report_group,
};
let id = repo.upsert_customer(&new_customer).await?;
customer_ids.insert(customer_number.clone(), id);
println!(" Customer {} -> id {}", customer_number, id);
}
println!("\nImporting cards...");
let mut card_ids: HashMap<String, u32> = HashMap::new();
for (card_number, customer_number) in &seen_cards {
if let Some(&customer_id) = customer_ids.get(customer_number) {
let new_card = NewCard {
card_number: card_number.clone(),
customer_id,
};
let id = repo.upsert_card(&new_card).await?;
card_ids.insert(card_number.clone(), id);
println!(" Card {} -> customer {} -> id {}", card_number, customer_number, id);
}
}
println!("\nImporting transactions...");
let batch_size = 500;
let mut total_inserted = 0u64;
let mut batch: Vec<NewTransaction> = Vec::with_capacity(batch_size);
for tx in transactions {
let customer_id = customer_ids.get(&tx.customer_number).copied();
let new_tx = NewTransaction {
transaction_date: tx.date,
batch_number: tx.batch_number,
amount: tx.amount,
volume: tx.volume,
price: tx.price,
quality_code: tx.quality,
quality_name: tx.quality_name,
card_number: tx.card_number,
station: tx.station,
terminal: tx.terminal,
pump: tx.pump,
receipt: tx.receipt,
control_number: if tx.control_number.is_empty() { None } else { Some(tx.control_number) },
customer_id,
};
batch.push(new_tx);
if batch.len() >= batch_size {
let inserted = repo.insert_transactions_batch(&batch).await?;
total_inserted += inserted;
println!(" Inserted {} transactions (total: {})", inserted, total_inserted);
batch.clear();
}
}
if !batch.is_empty() {
let inserted = repo.insert_transactions_batch(&batch).await?;
total_inserted += inserted;
println!(" Inserted {} transactions (total: {})", inserted, total_inserted);
}
println!("\nDone! Imported {} transactions", total_inserted);
Ok(())
}
struct CsvTransaction {
date: NaiveDateTime,
batch_number: String,
amount: f64,
volume: f64,
price: f64,
quality: i32,
quality_name: String,
card_number: String,
customer_number: String,
station: String,
terminal: String,
pump: String,
receipt: String,
card_report_group_number: String,
control_number: String,
}
fn get_field(record: &csv::StringRecord, index: usize) -> &str {
record.get(index).unwrap_or("")
}
fn parse_record(record: &csv::StringRecord) -> anyhow::Result<Option<CsvTransaction>> {
let date_str = get_field(record, 0);
let date = NaiveDateTime::parse_from_str(date_str, "%Y-%m-%d %H:%M:%S")
.or_else(|_| NaiveDateTime::parse_from_str(date_str, "%m/%d/%Y %I:%M:%S %p"))
.map_err(|e| anyhow::anyhow!("Failed to parse date '{}': {}", date_str, e))?;
let amount: f64 = get_field(record, 2).parse().unwrap_or(0.0);
if amount <= 0.0 {
return Ok(None);
}
let customer_number = get_field(record, 9).to_string();
Ok(Some(CsvTransaction {
date,
batch_number: get_field(record, 1).to_string(),
amount,
volume: get_field(record, 3).parse().unwrap_or(0.0),
price: get_field(record, 4).parse().unwrap_or(0.0),
quality: get_field(record, 5).parse().unwrap_or(0),
quality_name: get_field(record, 6).to_string(),
card_number: get_field(record, 7).to_string(),
customer_number,
station: get_field(record, 10).to_string(),
terminal: get_field(record, 11).to_string(),
pump: get_field(record, 12).to_string(),
receipt: get_field(record, 13).to_string(),
card_report_group_number: get_field(record, 14).to_string(),
control_number: get_field(record, 15).to_string(),
}))
}
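The flush-at-500 batching in run_import can equivalently be expressed with slice chunking; a small stand-in sketch (illustrative u64 rows in place of NewTransaction):

```rust
// Stand-in for the batched import: each chunk models one multi-row
// INSERT. chunks() yields full batches of `batch_size` plus a final
// partial batch, matching the manual "flush when full, then drain
// the remainder" loop above.
fn import_in_batches(rows: &[u64], batch_size: usize) -> (u64, usize) {
    let mut total_inserted = 0u64;
    let mut statements = 0usize;
    for batch in rows.chunks(batch_size) {
        total_inserted += batch.len() as u64; // stand-in for insert_transactions_batch
        statements += 1;
    }
    (total_inserted, statements)
}
```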

src/commands/mod.rs (new file)

@@ -0,0 +1,5 @@
pub mod db;
pub mod import;
pub use db::{run_db_reset, run_db_setup};
pub use import::run_import;

src/config.rs (new file)

@@ -0,0 +1,137 @@
use std::fs;
use std::path::Path;
#[derive(Debug, Clone, Default, PartialEq)]
pub enum Env {
#[default]
Prod,
Dev,
Test,
}
impl Env {
pub fn as_str(&self) -> &str {
match self {
Env::Prod => "prod",
Env::Dev => "dev",
Env::Test => "test",
}
}
pub fn database_name(&self) -> &str {
match self {
Env::Prod => "rusty_petroleum",
Env::Dev => "rusty_petroleum_dev",
Env::Test => "rusty_petroleum_test",
}
}
}
impl std::str::FromStr for Env {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.to_lowercase().as_str() {
"prod" | "production" => Ok(Env::Prod),
"dev" | "development" => Ok(Env::Dev),
"test" | "testing" => Ok(Env::Test),
_ => Err(format!("Unknown environment: {}", s)),
}
}
}
#[derive(Debug, Clone)]
pub struct Config {
pub env: Env,
pub database: DatabaseConfig,
}
#[derive(Debug, Clone)]
pub struct DatabaseConfig {
pub host: String,
pub port: u16,
pub user: String,
pub password: String,
pub name: String,
}
impl DatabaseConfig {
pub fn connection_url(&self) -> String {
if self.password.is_empty() {
format!(
"mysql://{}@{}:{}/{}",
self.user, self.host, self.port, self.name
)
} else {
format!(
"mysql://{}:{}@{}:{}/{}",
self.user, self.password, self.host, self.port, self.name
)
}
}
}
impl Config {
pub fn load(env: Env) -> anyhow::Result<Self> {
let config_path = Path::new("config.toml");
let example_path = Path::new("config.example.toml");
let env_config_filename = format!("config.{}.toml", env.as_str());
let env_config_path = Path::new(&env_config_filename);
let path = if config_path.exists() {
config_path
} else if env_config_path.exists() {
env_config_path
} else if example_path.exists() {
example_path
} else {
return Err(anyhow::anyhow!(
"No configuration file found. Create config.example.toml or config.toml"
));
};
Self::load_from_path(path, env)
}
pub fn load_from_path(path: &Path, env: Env) -> anyhow::Result<Self> {
let contents = fs::read_to_string(path)
.map_err(|e| anyhow::anyhow!("Failed to read config file {:?}: {}", path, e))?;
let config: TomlConfig = toml::from_str(&contents)
.map_err(|e| anyhow::anyhow!("Failed to parse config file {:?}: {}", path, e))?;
let mut result: Config = config.into();
result.env = env;
Ok(result)
}
}
#[derive(serde::Deserialize)]
struct TomlConfig {
database: TomlDatabaseConfig,
}
#[derive(serde::Deserialize)]
struct TomlDatabaseConfig {
host: String,
port: u16,
user: String,
password: String,
name: String,
}
impl From<TomlConfig> for Config {
fn from(toml: TomlConfig) -> Self {
Config {
env: Env::default(),
database: DatabaseConfig {
host: toml.database.host,
port: toml.database.port,
user: toml.database.user,
password: toml.database.password,
name: toml.database.name,
},
}
}
}

src/db/connection.rs (new file)

@@ -0,0 +1,11 @@
use sqlx::mysql::MySqlPoolOptions;
use sqlx::MySqlPool;

pub async fn create_pool(database_url: &str) -> anyhow::Result<MySqlPool> {
    let pool = MySqlPoolOptions::new()
        .max_connections(10)
        .connect(database_url)
        .await?;
    Ok(pool)
}

src/db/mod.rs (new file)

@@ -0,0 +1,7 @@
pub mod connection;
pub mod models;
pub mod repository;
pub use connection::create_pool;
pub use models::{Card, Customer, NewCard, NewCustomer, Transaction};
pub use repository::Repository;

src/db/models.rs (new file)

@@ -0,0 +1,72 @@
use bigdecimal::BigDecimal;
use chrono::{DateTime, NaiveDateTime, Utc};
use serde::{Deserialize, Serialize};
use sqlx::FromRow;

#[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
pub struct Customer {
    pub id: u32,
    pub customer_number: String,
    pub card_report_group: u8,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}

#[derive(Debug, Clone)]
pub struct NewCustomer {
    pub customer_number: String,
    pub card_report_group: u8,
}

#[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
pub struct Card {
    pub id: u32,
    pub card_number: String,
    pub customer_id: u32,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}

#[derive(Debug, Clone)]
pub struct NewCard {
    pub card_number: String,
    pub customer_id: u32,
}

#[derive(Debug, Clone, Serialize, Deserialize, FromRow)]
pub struct Transaction {
    pub id: u64,
    pub transaction_date: NaiveDateTime,
    pub batch_number: String,
    pub amount: BigDecimal,
    pub volume: BigDecimal,
    pub price: BigDecimal,
    pub quality_code: i32,
    pub quality_name: String,
    pub card_number: String,
    pub station: String,
    pub terminal: String,
    pub pump: String,
    pub receipt: String,
    pub control_number: Option<String>,
    pub customer_id: Option<u32>,
    pub created_at: DateTime<Utc>,
}

#[derive(Debug, Clone)]
pub struct NewTransaction {
    pub transaction_date: NaiveDateTime,
    pub batch_number: String,
    pub amount: f64,
    pub volume: f64,
    pub price: f64,
    pub quality_code: i32,
    pub quality_name: String,
    pub card_number: String,
    pub station: String,
    pub terminal: String,
    pub pump: String,
    pub receipt: String,
    pub control_number: Option<String>,
    pub customer_id: Option<u32>,
}

src/db/repository.rs (new file)

@@ -0,0 +1,224 @@
use crate::db::models::{Card, Customer, NewCard, NewCustomer, NewTransaction, Transaction};
use bigdecimal::BigDecimal;
use sqlx::MySqlPool;
pub struct Repository {
pool: MySqlPool,
}
impl Repository {
pub fn new(pool: MySqlPool) -> Self {
Self { pool }
}
pub fn pool(&self) -> &MySqlPool {
&self.pool
}
pub async fn upsert_customer(&self, customer: &NewCustomer) -> anyhow::Result<u32> {
sqlx::query(
r#"
INSERT INTO customers (customer_number, card_report_group)
VALUES (?, ?)
ON DUPLICATE KEY UPDATE
card_report_group = VALUES(card_report_group),
updated_at = CURRENT_TIMESTAMP
"#,
)
.bind(&customer.customer_number)
.bind(customer.card_report_group)
.execute(&self.pool)
.await?;
let row: (u32,) = sqlx::query_as(
"SELECT id FROM customers WHERE customer_number = ?",
)
.bind(&customer.customer_number)
.fetch_one(&self.pool)
.await?;
Ok(row.0)
}
pub async fn find_customer_by_number(
&self,
customer_number: &str,
) -> anyhow::Result<Option<Customer>> {
let result = sqlx::query_as(
"SELECT id, customer_number, card_report_group, created_at, updated_at
FROM customers
WHERE customer_number = ?",
)
.bind(customer_number)
.fetch_optional(&self.pool)
.await?;
Ok(result)
}
pub async fn upsert_card(&self, card: &NewCard) -> anyhow::Result<u32> {
sqlx::query(
r#"
INSERT INTO cards (card_number, customer_id)
VALUES (?, ?)
ON DUPLICATE KEY UPDATE
customer_id = VALUES(customer_id),
updated_at = CURRENT_TIMESTAMP
"#,
)
.bind(&card.card_number)
.bind(card.customer_id)
.execute(&self.pool)
.await?;
let row: (u32,) = sqlx::query_as(
"SELECT id FROM cards WHERE card_number = ?",
)
.bind(&card.card_number)
.fetch_one(&self.pool)
.await?;
Ok(row.0)
}
pub async fn find_card_by_number(&self, card_number: &str) -> anyhow::Result<Option<Card>> {
let result = sqlx::query_as(
"SELECT id, card_number, customer_id, created_at, updated_at
FROM cards
WHERE card_number = ?",
)
.bind(card_number)
.fetch_optional(&self.pool)
.await?;
Ok(result)
}
pub async fn insert_transactions_batch(
&self,
transactions: &[NewTransaction],
) -> anyhow::Result<u64> {
if transactions.is_empty() {
return Ok(0);
}
let mut query = String::from(
"INSERT INTO transactions (transaction_date, batch_number, amount, volume, price, quality_code, quality_name, card_number, station, terminal, pump, receipt, control_number, customer_id) VALUES ",
);
let mut values = Vec::new();
for tx in transactions {
values.push(format!(
"('{}', '{}', {}, {}, {}, {}, '{}', '{}', '{}', '{}', '{}', '{}', {}, {})",
tx.transaction_date.format("%Y-%m-%d %H:%M:%S"),
tx.batch_number,
tx.amount,
tx.volume,
tx.price,
tx.quality_code,
tx.quality_name.replace("'", "''"),
tx.card_number.replace("'", "''"),
tx.station,
tx.terminal,
tx.pump,
tx.receipt,
tx.control_number.as_ref().map(|s| format!("'{}'", s.replace("'", "''"))).unwrap_or_else(|| "NULL".to_string()),
tx.customer_id.map(|id| id.to_string()).unwrap_or_else(|| "NULL".to_string()),
));
}
query.push_str(&values.join(", "));
let result = sqlx::query(&query).execute(&self.pool).await?;
Ok(result.rows_affected())
}
pub async fn get_customer_invoice(
&self,
customer_number: &str,
start_date: &str,
end_date: &str,
) -> anyhow::Result<Vec<Transaction>> {
let result = sqlx::query_as(
r#"
SELECT t.id, t.transaction_date, t.batch_number, t.amount, t.volume, t.price,
t.quality_code, t.quality_name, t.card_number, t.station, t.terminal,
t.pump, t.receipt, t.control_number, t.customer_id, t.created_at
FROM transactions t
JOIN customers c ON t.customer_id = c.id
WHERE c.customer_number = ?
AND t.transaction_date >= ?
AND t.transaction_date <= ?
ORDER BY t.transaction_date
"#,
)
.bind(customer_number)
.bind(start_date)
.bind(end_date)
.fetch_all(&self.pool)
.await?;
Ok(result)
}
pub async fn get_sales_summary_by_product(
&self,
start_date: &str,
end_date: &str,
) -> anyhow::Result<Vec<ProductSummary>> {
let result = sqlx::query_as(
r#"
SELECT quality_name, COUNT(*) as tx_count, SUM(amount) as total_amount, SUM(volume) as total_volume
FROM transactions
WHERE transaction_date >= ? AND transaction_date <= ?
GROUP BY quality_name
"#,
)
.bind(start_date)
.bind(end_date)
.fetch_all(&self.pool)
.await?;
Ok(result)
}
pub async fn get_sales_summary_by_customer(
&self,
start_date: &str,
end_date: &str,
) -> anyhow::Result<Vec<CustomerSummary>> {
let result = sqlx::query_as(
r#"
SELECT c.customer_number, COUNT(*) as tx_count, SUM(t.amount) as total_amount, SUM(t.volume) as total_volume
FROM transactions t
JOIN customers c ON t.customer_id = c.id
WHERE t.transaction_date >= ? AND t.transaction_date <= ?
GROUP BY c.customer_number
ORDER BY total_amount DESC
"#,
)
.bind(start_date)
.bind(end_date)
.fetch_all(&self.pool)
.await?;
Ok(result)
}
}
#[derive(Debug, sqlx::FromRow)]
pub struct ProductSummary {
pub quality_name: String,
pub tx_count: i64,
pub total_amount: BigDecimal,
pub total_volume: BigDecimal,
}
#[derive(Debug, sqlx::FromRow)]
pub struct CustomerSummary {
pub customer_number: String,
pub tx_count: i64,
pub total_amount: BigDecimal,
pub total_volume: BigDecimal,
}
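Note that insert_transactions_batch builds the VALUES list with format! and escapes quotes only in quality_name, card_number, and control_number; columns such as station and batch_number are interpolated raw. A hedged sketch of a uniform quoting helper follows (parameterized binds, or sqlx's query builder, would be the more robust fix):

```rust
// Quote a string for use as a MySQL literal: escape backslashes and
// double any single quotes, then wrap in quotes. Every interpolated
// text column should pass through this (or, better, be bound as a
// parameter instead of formatted into the SQL).
fn sql_quote(raw: &str) -> String {
    format!("'{}'", raw.replace('\\', "\\\\").replace('\'', "''"))
}

// Optional variant for nullable columns like control_number.
fn sql_opt(raw: Option<&str>) -> String {
    raw.map(sql_quote).unwrap_or_else(|| "NULL".to_string())
}
```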

src/invoice_generator.rs

@@ -68,7 +68,7 @@ impl Transaction {
     }
 }
-pub fn read_csv_file(path: &Path) -> Result<Batch, Box<dyn std::error::Error>> {
+pub fn read_csv_file(path: &Path) -> anyhow::Result<Batch> {
     let filename = path
         .file_name()
         .and_then(|n| n.to_str())

src/main.rs

@@ -1,14 +1,18 @@
+mod commands;
+mod config;
+mod db;
+mod invoice_generator;
 use askama::Template;
 use chrono::{NaiveDateTime, Utc};
+use config::{Config, Env};
 use csv::ReaderBuilder;
+use db::{create_pool, Repository};
+use invoice_generator::{group_by_customer, read_csv_file, Customer};
 use std::collections::HashMap;
 use std::env;
 use std::fs;
-use std::path::Path;
+use std::path::{Path, PathBuf};
-mod invoice_generator;
-use invoice_generator::{group_by_customer, read_csv_file, Customer};
 fn fmt(v: f64) -> String {
     format!("{:.2}", v)
@@ -17,7 +21,7 @@ fn fmt(v: f64) -> String {
 fn clean_csv_file(
     input_path: &Path,
     output_path: &Path,
-) -> Result<String, Box<dyn std::error::Error>> {
+) -> anyhow::Result<String> {
     let file = fs::File::open(input_path)?;
     let mut rdr = ReaderBuilder::new()
         .delimiter(b'\t')
@@ -229,19 +233,77 @@ struct CustomerTemplate {
     generated_date: String,
 }
-fn main() -> Result<(), Box<dyn std::error::Error>> {
+fn parse_env_flag(args: &[String]) -> (Env, usize) {
+    for (i, arg) in args.iter().enumerate() {
+        if arg == "--env" && i + 1 < args.len() {
+            match args[i + 1].parse() {
+                Ok(env) => return (env, i),
+                Err(e) => {
+                    eprintln!("Error: {}", e);
+                    std::process::exit(1);
+                }
+            }
+        }
+    }
+    (Env::default(), 0)
+}
+fn remove_env_flags(args: &[String]) -> Vec<String> {
+    let (_, env_idx) = parse_env_flag(args);
+    // parse_env_flag returns index 0 as a sentinel when no --env flag is
+    // present; only strip argv entries when the flag was actually found,
+    // otherwise args[0] (the program name) would be dropped.
+    let has_env = args.get(env_idx) == Some(&"--env".to_string());
+    let mut result = Vec::with_capacity(args.len());
+    for (i, arg) in args.iter().enumerate() {
+        if has_env && (i == env_idx || i == env_idx + 1) {
+            continue;
+        }
+        result.push(arg.clone());
+    }
+    result
+}
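The two helpers above separate global flag handling from subcommand dispatch: `parse_env_flag` finds `--env <name>` anywhere in argv, and `remove_env_flags` strips it so positional arguments keep stable indices. A standalone sketch of the same strip-a-global-flag pattern (plain strings stand in for the `Env` type from `config.rs`; the names `find_env_flag`/`strip_env_flag` and the sample argv are hypothetical). Returning `Option` instead of a 0 sentinel makes the "flag absent" case explicit:

```rust
// Locate "--env <value>" anywhere in argv; None when the flag is absent.
fn find_env_flag(args: &[String]) -> Option<(String, usize)> {
    args.iter()
        .position(|a| a == "--env")
        .and_then(|i| args.get(i + 1).map(|value| (value.clone(), i)))
}

// Return argv with "--env <value>" removed, so positional arguments keep
// stable indices regardless of where the flag appeared.
fn strip_env_flag(args: &[String]) -> Vec<String> {
    match find_env_flag(args) {
        Some((_, i)) => {
            let mut out = args.to_vec();
            out.drain(i..=i + 1); // drop the flag and its value
            out
        }
        None => args.to_vec(),
    }
}

fn main() {
    // Hypothetical argv for a "billing" binary.
    let args: Vec<String> = ["billing", "import", "data.csv", "--env", "dev"]
        .iter()
        .map(|s| s.to_string())
        .collect();
    println!("{:?}", find_env_flag(&args));
    println!("{:?}", strip_env_flag(&args));
}
```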
+#[tokio::main]
+async fn main() -> anyhow::Result<()> {
     let args: Vec<String> = env::args().collect();
-    if args.len() != 3 {
-        eprintln!("Användning: {} <csv-fil> <utdatakatalog>", args[0]);
+    let (env, _) = parse_env_flag(&args);
+    if args.len() < 2 {
+        print_usage(&args[0]);
         std::process::exit(1);
     }
-    let input_path = Path::new(&args[1]);
-    let base_output_dir = Path::new(&args[2]);
+    match args[1].as_str() {
+        "import" => {
+            let clean_args = remove_env_flags(&args);
+            if clean_args.len() != 3 {
+                eprintln!("Usage: {} import <csv-file> [--env <name>]", clean_args[0]);
+                std::process::exit(1);
+            }
+            let csv_path = PathBuf::from(&clean_args[2]);
+            if !csv_path.exists() {
+                eprintln!("Error: File not found: {:?}", csv_path);
+                std::process::exit(1);
+            }
+            println!("Environment: {}", env.as_str());
+            let config = Config::load(env)?;
+            let pool = create_pool(&config.database.connection_url()).await?;
+            let repo = Repository::new(pool);
+            commands::run_import(&csv_path, &repo).await?;
+        }
+        "generate" => {
+            let clean_args = remove_env_flags(&args);
+            if clean_args.len() != 4 {
+                eprintln!("Usage: {} generate <csv-file> <output-dir> [--env <name>]", clean_args[0]);
+                std::process::exit(1);
+            }
+            let input_path = Path::new(&clean_args[2]);
+            let base_output_dir = Path::new(&clean_args[3]);
     if !input_path.exists() {
-        eprintln!("Fel: Filen hittades inte: {:?}", input_path);
+        eprintln!("Error: File not found: {:?}", input_path);
         std::process::exit(1);
     }
@@ -251,7 +313,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
         .unwrap_or("unknown")
         .to_string();
-    println!("Konverterar {} till rensat format...", filename);
+    println!("Converting {} to cleaned format...", filename);
     let temp_cleaned_path =
         base_output_dir.join(format!("{}.temp.csv", filename.trim_end_matches(".txt")));
@@ -267,7 +329,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
     )?;
     println!(
-        "Konverterade {} transaktioner",
+        "Converted {} transactions",
         fs::read_to_string(output_dir.join(format!("{}.csv", batch_number)))?
             .lines()
             .count()
@@ -275,7 +337,7 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
     );
     let batch = read_csv_file(&output_dir.join(format!("{}.csv", batch_number)))?;
-    println!("Laddade {} transaktioner", batch.transactions.len());
+    println!("Loaded {} transactions", batch.transactions.len());
     let first_date = batch.transactions.first().map(|t| t.date).unwrap();
     let last_date = batch.transactions.last().map(|t| t.date).unwrap();
@@ -315,13 +377,67 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
         .unwrap();
         let filename = format!("customer_{}.html", customer_num);
         fs::write(output_dir.join(&filename), customer_html)?;
-        println!("Genererade {}", filename);
+        println!("Generated {}", filename);
     }
     println!(
-        "\nGenererade {} kundfakturor i {:?}",
+        "\nGenerated {} customer invoices in {:?}",
         customer_count, output_dir
     );
+        }
+        "db" => {
+            let clean_args = remove_env_flags(&args);
+            if clean_args.len() < 3 {
+                eprintln!("Usage: {} db <subcommand> [--env <name>]", clean_args[0]);
+                eprintln!("Subcommands:");
+                eprintln!("  setup    Create database and schema");
+                eprintln!("  reset    Drop and recreate database");
+                std::process::exit(1);
+            }
+            println!("Environment: {}", env.as_str());
+            let config = Config::load(env)?;
+            match clean_args[2].as_str() {
+                "setup" => {
+                    let pool = create_pool(&config.database.connection_url()).await?;
+                    let repo = Repository::new(pool);
+                    commands::run_db_setup(&repo, &config).await?;
+                }
+                "reset" => {
+                    commands::run_db_reset(&config).await?;
+                }
+                _ => {
+                    eprintln!("Unknown db subcommand: {}", clean_args[2]);
+                    eprintln!("Subcommands:");
+                    eprintln!("  setup    Create database and schema");
+                    eprintln!("  reset    Drop and recreate database");
+                    std::process::exit(1);
+                }
+            }
+        }
+        "help" | "--help" | "-h" => {
+            print_usage(&args[0]);
+        }
+        _ => {
+            eprintln!("Unknown command: {}", args[1]);
+            print_usage(&args[0]);
+            std::process::exit(1);
+        }
+    }
     Ok(())
 }
+fn print_usage(program: &str) {
+    eprintln!("Usage: {} <command> [arguments]", program);
+    eprintln!();
+    eprintln!("Commands:");
+    eprintln!("  import <csv-file> [--env <name>]   Import CSV data to database (default: prod)");
+    eprintln!("  generate <csv> <dir>               Generate HTML invoices from CSV");
+    eprintln!("  db setup [--env <name>]            Create database and schema (default: prod)");
+    eprintln!("  db reset [--env <name>]            Drop and recreate database (default: prod)");
+    eprintln!("  help                               Show this help message");
+    eprintln!();
+    eprintln!("Environments: prod (default), dev, test");
+}