GPU Cost Optimization for Analytics: Implementation Guide + Pitfalls


GPU Cost Optimization for Analytics is not about buying cheaper hardware. It is about building a measurable pipeline that keeps GPUs busy on the right work, avoids silent waste, and protects accuracy while you scale. This guide gives you a practical implementation playbook: how costs really form (compute, memory, I/O, orchestration), where teams bleed money (underutilization, bad batching, over-precision, runaway retries), and how to monitor the signals that tell you your analytics workloads are drifting into expensive territory.

TL;DR

  • GPU spend usually explodes from idle time, bad batching, oversized models, over-precision, and data pipelines that starve the GPU.
  • Optimize in order: measure utilization → fix input throughput → batch + fuse → mixed precision → right-size instances → autoscale → cap retries.
  • For data-heavy analytics, the bottleneck is often memory bandwidth or I/O, not raw GPU FLOPS.
  • Major pitfalls: “GPU always faster” thinking, ignoring warmup and compilation costs, hidden egress fees, inefficient distributed setups, and missing guardrails that cause runaway jobs.
  • Prerequisite reading if your analytics touches on-chain adversaries and MEV patterns: MEV Sandwich Detection at Scale. It explains why cost grows nonlinearly when you add adversarial data, backfills, and streaming requirements.
  • For structured learning and ongoing playbooks, use the AI Learning Hub, speed up execution with Prompt Libraries, and keep the stack current by Subscribing.
  • When on-chain analytics is part of your workflow, tools in AI Crypto Tools can complement your pipeline (data sources, monitoring, and research tooling).
Implementation: treat GPUs like a production line, not a magic box

GPUs are expensive because they are specialized. The moment your pipeline fails to feed them efficiently, you pay premium prices for idle silicon. Cost optimization is a loop: instrument, identify the bottleneck, apply the smallest change that increases useful utilization, and verify quality does not regress. This guide is built around that loop with real-world pitfalls and monitoring patterns.

If your analytics includes adversarial on-chain behaviors, start with prerequisite reading: MEV Sandwich Detection at Scale. It frames the workload patterns that break naive scaling and inflate compute.

Why GPU spend spikes in analytics workloads

“Analytics” is a wide word. It can mean dashboards, anomaly detection, embeddings, clustering, forecasting, simulation, backtests, streaming classification, or large-scale feature extraction. The common thread is that analytics teams want faster iteration and more throughput, and GPUs feel like the straightforward answer.

The hard truth is that most analytics pipelines are not GPU-bound. They are data-bound. A job can run on a GPU and still waste money if it spends most of its wall-clock time waiting for data, waiting for CPU preprocessing, waiting for network reads, or stalling on memory transfers. When that happens, adding more GPUs multiplies waste instead of increasing throughput.

GPU Cost Optimization for Analytics starts by understanding where cost actually comes from. Not invoices, not hourly rates, but the mechanics of time and bottlenecks.

What you are actually paying for

  • Allocated time (wall-clock hours): idle time counts the same as compute time if the instance is reserved or running.
  • Data movement (I/O and transfers): disk reads, network pulls, CPU↔GPU copies, and egress can dominate costs.
  • Inefficiency (underutilization): low GPU utilization is the most common “silent burn” in analytics pipelines.
  • Over-precision (needless FP32/FP64): many analytics workloads tolerate mixed precision with no quality loss.
  • Retries (runaway jobs): poor guardrails cause exponential spend when jobs fail and retry endlessly.
  • Coordination (distributed overhead): bad partitioning and communication patterns can erase GPU speedups.

The myths that create expensive pipelines

  • Myth: GPUs automatically make analytics cheaper because they are faster.
    Reality: a faster kernel does not help if the GPU is waiting on data.
  • Myth: the largest GPU is always best.
    Reality: right-sizing often beats peak specs for throughput-per-dollar.
  • Myth: mixed precision is only for deep learning.
    Reality: many analytics kernels benefit from FP16/BF16 without measurable output changes.
  • Myth: distributed equals scalable.
    Reality: communication, shuffles, and skew can dominate at scale.
  • Myth: you can optimize later.
    Reality: cost patterns become infrastructure habits, and habits are expensive to reverse.
A practical loop for GPU cost optimization

Instrument → identify the bottleneck → apply the smallest fix → verify quality → repeat.

  • Instrument: GPU utilization, memory, I/O, batch times, retries, queueing.
  • Find the real bottleneck: data loader, CPU prep, transfers, kernel, comms.
  • Apply the smallest fix: batching, caching, mixed precision, right-sizing, autoscaling.
  • Verify correctness: quality checks, drift tests, reproducibility, alerting.
  • Guardrails: caps, budgets, retry limits, circuit breakers, egress monitoring.

Map your analytics workload before you touch the GPU

“GPU optimization” fails when teams optimize the wrong stage. The same code can be compute-heavy in one dataset and I/O-heavy in another. The same model can be cheap in training and expensive in inference if you deploy it incorrectly. You need a workload map first.

A workload map is a high-level breakdown of time, not complexity: where does wall-clock time go, and what resources are saturated when it goes there?

Typical stages in GPU-accelerated analytics

  • Ingest: pull from object storage, databases, RPC endpoints, or data lakes.
  • Parse: decompress, decode, tokenize, normalize, validate schemas.
  • Transform: feature extraction, joins, group-bys, rolling windows, graph transforms.
  • Model: training, inference, embedding generation, clustering, anomaly scoring.
  • Post-process: thresholds, aggregations, calibration, explainability artifacts.
  • Write-out: stores, indexes, dashboards, snapshots, caches.

What changes when you move work to GPU

A GPU is good at parallel work with high arithmetic intensity. That does not mean every operation gets faster. Some operations are limited by memory bandwidth. Others suffer from small batch sizes and kernel launch overhead. Some become slower because you increase data transfers.

So the goal is not “put everything on GPU.” The goal is: move the right parts to GPU and then keep the GPU fed.

First principles that prevent expensive mistakes

Principle 1: utilization beats peak speed

The simplest cost metric is: useful work per dollar. Useful work increases when GPU utilization is consistently high during the job, not when a single kernel runs fast.

Many teams buy bigger GPUs to “go faster,” then discover the GPU sits at 10 to 25 percent utilization due to slow preprocessing. The correct fix is usually: faster input pipeline, better batching, caching, and fewer transfers.

Principle 2: bandwidth and I/O win more than FLOPS in analytics

Analytics often involves large arrays, joins, scans, and transforms. These can be limited by memory bandwidth or I/O. If you do not measure these, you will misattribute slowdowns to “not enough GPUs.”

Principle 3: avoid the precision tax

In many analytics tasks, FP32 is more than you need. Mixed precision can yield large speedups and memory savings. The correct question is not “can we use FP16,” but “what parts require high precision.”

For example: accumulations can be sensitive, but intermediate operations can be lower precision. In deep learning inference, mixed precision is standard. In embedding generation for analytics, it is often safe and beneficial.

Principle 4: guardrails are part of optimization

A well-optimized pipeline can still become expensive if it fails and retries repeatedly. Or if a data backfill accidentally triggers a week-long run. Or if a new dataset doubles input size without anyone noticing.

Optimization without guardrails is not optimization. It is an invitation for expensive failures.

Pitfalls that quietly burn GPU budgets

The best way to save money is to avoid the mistakes that create large invoices in the first place. Below are the most common pitfalls, why they happen, and what to do instead.

Pitfall: GPU starvation from slow input pipelines

GPU starvation means the GPU is waiting on CPU preprocessing, disk reads, network reads, or serialization. You will see low utilization, spiky usage, and long gaps between batches.

Typical causes:

  • Single-threaded preprocessing or Python bottlenecks.
  • Small batch sizes and frequent kernel launches.
  • Compressed formats that require heavy CPU decode.
  • Remote storage reads without caching or prefetch.
  • Excessive CPU↔GPU transfers per batch.

Fix patterns:

  • Move preprocessing closer to the data or precompute features offline.
  • Increase batch size until you approach memory limits, then tune.
  • Use pinned memory and async transfers when applicable.
  • Cache hot datasets and reuse compiled kernels.
  • When possible, keep transformations on GPU to avoid round-trips.
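
The prefetch fix above can be sketched with a plain producer/consumer queue, independent of any GPU framework. The `load`-and-`compute` callables here are hypothetical stand-ins for your pipeline stages; a minimal sketch, assuming the producer and consumer can overlap:

```python
import queue
import threading

def prefetching_pipeline(batches, compute, depth=2):
    """Stage up to `depth` batches on a background thread while the
    current batch is being computed, so the consumer never idles
    waiting on the producer."""
    q = queue.Queue(maxsize=depth)
    SENTINEL = object()

    def producer():
        for b in batches:
            q.put(b)  # blocks once `depth` batches are already staged
        q.put(SENTINEL)

    threading.Thread(target=producer, daemon=True).start()
    results = []
    while (b := q.get()) is not SENTINEL:
        results.append(compute(b))
    return results

# Example: "compute" squares each batch while the next one is staged.
out = prefetching_pipeline([1, 2, 3], lambda b: b * b)
```

In real frameworks the same idea shows up as pinned-memory loaders with worker processes; the point is that batch N+1 is prepared while batch N computes.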

Pitfall: “bigger batches always better”

Batching improves throughput, but uncontrolled batching can increase latency, memory pressure, and failure rates. In analytics, oversized batches can also hide tail latency and create uneven scheduling that wastes cluster resources.

The practical approach: find the batch size that stabilizes utilization while keeping memory headroom. Then monitor for drift.

Pitfall: hidden network and egress costs

GPU jobs often run in cloud environments where data movement is charged. Egress costs can rival compute costs if your pipeline repeatedly pulls the same dataset across zones or regions.

Common mistakes:

  • Reading large datasets across regions.
  • Writing intermediate artifacts to remote storage on every step.
  • Streaming uncompressed logs and debug outputs at high volume.
  • Repeatedly downloading model checkpoints per worker without shared caches.

Fix patterns:

  • Co-locate compute with storage.
  • Use shared caches or artifact stores in the same region.
  • Write fewer, larger artifacts rather than many small ones.
  • Turn on sampling for logs and cap debug output in production.

Pitfall: runaway retries and “zombie spend”

Zombie spend happens when jobs keep running or retrying long after they should have been killed. It often comes from: unstable dependencies, transient storage errors, unbounded retries, or missing timeouts.

If you only remember one guardrail: cap retries and set timeouts. A job that fails fast is cheap. A job that fails slowly is expensive.

Pitfall: distributed overhead that erases GPU benefits

Distributed analytics can look “more scalable,” but communication overhead can dominate if you partition incorrectly. You see this in: large shuffles, skewed keys, frequent all-reduces, or workloads where only a small part benefits from GPU acceleration.

Fix patterns:

  • Partition by a stable key that balances workload.
  • Reduce shuffle frequency by pushing down filters early.
  • Aggregate locally before global communication.
  • Prefer fewer, longer-lived workers over many short-lived workers when warmup is expensive.

Pitfall: optimizing cost and silently breaking correctness

GPU cost optimization can change numerical behavior. Mixed precision can introduce small differences. Aggressive caching can return stale data. Approximate methods can create bias.

You need “quality gates” that confirm the optimized pipeline still produces acceptable results. In analytics, that can mean: distribution checks, sample parity checks, drift measures, or accuracy metrics on a validation set.
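
A distribution-check gate can be a few lines of stdlib Python. The thresholds below are illustrative assumptions, not recommendations; tune them to your workload:

```python
import statistics

def distribution_gate(baseline, candidate, max_mean_shift=0.1, max_std_ratio=1.5):
    """Pass only if the optimized pipeline's outputs stay close to the
    baseline: mean shift bounded in baseline-std units, and spread
    bounded by a ratio."""
    b_mean, c_mean = statistics.fmean(baseline), statistics.fmean(candidate)
    b_std = statistics.stdev(baseline) or 1e-12
    c_std = statistics.stdev(candidate)
    mean_shift = abs(c_mean - b_mean) / b_std
    std_ratio = max(c_std, b_std) / max(min(c_std, b_std), 1e-12)
    return mean_shift <= max_mean_shift and std_ratio <= max_std_ratio

baseline = [0.0, 0.1, 0.2, 0.3, 0.4]
ok = distribution_gate(baseline, [0.01, 0.11, 0.21, 0.29, 0.41])   # small drift
bad = distribution_gate(baseline, [5.0, 5.1, 5.2, 5.3, 5.4])       # collapsed
```

Run gates like this on a fixed validation sample every time you change precision, batching, or caching, and block the rollout when they fail.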

Implementation guide: a cost-first GPU analytics playbook

This section is a practical sequence. If you follow it, you usually get meaningful savings without breaking the system. Each step includes what to measure and what to change.

1) Instrument the pipeline and capture a baseline

If you do not measure, you will optimize the wrong thing. You want a baseline that includes:

  • GPU utilization over time (average, p50, p95).
  • GPU memory usage and fragmentation patterns.
  • Data loader throughput and CPU utilization.
  • Batch time breakdown: preprocessing, transfer, compute, post-process.
  • Failure rate and retry counts.
  • End-to-end cost per unit of work (per million rows, per million events, per 1k inferences, per 1k embeddings).
```python
# Lightweight sanity checks you can run during development.
# (Exact tooling differs by environment, but the idea is consistent.)

# 1) Monitor GPU utilization from the shell:
# nvidia-smi --query-gpu=timestamp,utilization.gpu,utilization.memory,memory.used,memory.total \
#   --format=csv -l 1

# 2) Capture per-batch timings inside your code:
import time

def timed_step(name, fn):
    t0 = time.time()
    out = fn()
    dt = time.time() - t0
    return out, (name, dt)

# Example usage: measure data prep vs transfer vs compute.
# prep_out, prep_t = timed_step("prep", lambda: prep_batch(...))
# x_gpu, tx_t = timed_step("transfer", lambda: x_cpu.to(device, non_blocking=True))
# y, comp_t = timed_step("compute", lambda: model(x_gpu))
# Collect stats and log aggregated summaries every N batches.
```

The objective is to answer one question: what is the GPU waiting on? Most of your future decisions become obvious once you know that.

2) Fix the input pipeline before touching model optimizations

If the GPU is idle, do not optimize kernels first. Fix the pipeline that feeds the GPU. This is where the biggest savings happen for analytics workloads.

Practical moves:

  • Cache hot data: avoid re-reading the same dataset across runs. Cache in a region-local store when possible.
  • Prefetch: keep at least one batch prepared while the GPU is computing on the current batch.
  • Parallelize preprocessing: multi-process or vectorize CPU transformations; avoid Python loops.
  • Reduce decode cost: store intermediate datasets in formats that are cheaper to read for your workflow.
  • Minimize transfers: keep tensors on GPU across multiple operations, reduce CPU↔GPU bounce.

3) Batch, fuse, and avoid tiny kernels

GPUs are sensitive to overhead. If you launch many small kernels, you can be overhead-bound even when utilization looks “fine.” The result: higher cost per unit of work.

In analytics, the batching opportunities often come from:

  • Embedding generation in large batches instead of one-by-one calls.
  • Vector similarity or scoring computations batched across queries.
  • Feature transforms grouped into fused operations where possible.
  • Micro-batch inference for streaming systems with controlled latency budgets.
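
Why tiny kernels are expensive can be seen in a toy cost model: every launch pays a fixed overhead, so small batches amortize it poorly. The overhead and per-item constants below are made up for illustration:

```python
import math

def job_time(n_items, batch_size, launch_overhead=5e-5, per_item=1e-6):
    """Toy cost model: each kernel launch pays a fixed overhead on top
    of per-item work. Constants are illustrative, not measured."""
    n_launches = math.ceil(n_items / batch_size)
    return n_launches * launch_overhead + n_items * per_item

one_by_one = job_time(1_000_000, batch_size=1)       # overhead dominates
batched = job_time(1_000_000, batch_size=10_000)     # overhead amortized
# In this toy model the batched run is roughly 50x cheaper in time.
```

The real numbers differ per GPU and kernel, but the shape of the curve is the same: launch overhead is fixed, so throughput per dollar rises steeply with batch size until memory or latency limits bite.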

4) Apply mixed precision where it is safe

Mixed precision means doing parts of computation in lower precision formats like FP16 or BF16, while keeping sensitive accumulations in FP32. The benefits are twofold: more throughput and lower memory usage.

Where mixed precision often works well in analytics:

  • Deep learning inference for embeddings and classification.
  • Similarity computations and matrix multiplications.
  • Feature transforms that tolerate minor numeric differences.

Where you should be cautious:

  • Very small difference thresholds (risk scores near decision boundaries).
  • Long accumulation chains (sums over massive ranges) without stable accumulation strategies.
  • Financial calculations requiring exact reproducibility for audits.
```python
# Mixed precision inference pattern (illustrative).
# Use autocast to reduce precision where safe; keep outputs in FP32 if needed.
import torch

model.eval()
device = "cuda"

def infer_batch(x):
    x = x.to(device, non_blocking=True)
    with torch.no_grad():
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            y = model(x)
    return y.float()  # keep downstream logic stable
```

Mixed precision is not “set it and forget it.” You still need to validate output distributions and decision thresholds.

5) Right-size GPUs and choose the correct instance shape

Right-sizing is the fastest path to saving money after you fix utilization. A common failure mode: teams use large GPUs for workloads that fit on smaller GPUs with similar throughput.

The right approach is empirical: test a representative workload across two or three GPU sizes and compare cost per unit of work. If you only compare runtime, you can choose the wrong instance.
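
The comparison itself is trivial once you have measurements; what matters is normalizing by output, not runtime. The hourly rates and throughputs below are hypothetical:

```python
def cost_per_unit(hourly_rate, units_per_hour):
    """Dollars per unit of useful work for one instance shape."""
    return hourly_rate / units_per_hour

# Hypothetical numbers: the big GPU is 2.2x faster but 3x the price.
candidates = {
    "small-gpu": cost_per_unit(hourly_rate=1.0, units_per_hour=100_000),
    "large-gpu": cost_per_unit(hourly_rate=3.0, units_per_hour=220_000),
}
best = min(candidates, key=candidates.get)  # "small-gpu" wins on $/unit
```

Note that the faster instance loses here: it finishes sooner but costs more per row processed. That is exactly the error a runtime-only comparison hides.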

Right-sizing dimensions that matter:

  • VRAM: do you need large memory, or can you stream batches?
  • Memory bandwidth: critical for large array operations and transforms.
  • Interconnect: multi-GPU jobs can be communication-bound.
  • CPU and RAM: GPU jobs still need CPU for orchestration and preprocessing.

6) Use autoscaling and scheduling that matches workload shape

Analytics workloads are often bursty: backfills, daily pipelines, weekly rebuilds, ad hoc research spikes. Autoscaling prevents paying for idle clusters.

The key is choosing the right scaling signal:

  • Queue length and backlog age.
  • GPU utilization sustained over a window.
  • Latency SLOs for near-real-time pipelines.
  • Cost budgets that cap the maximum scale-out.
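
A scaling decision combining those signals can be sketched as a pure function; every threshold here is an illustrative assumption, and a real autoscaler would also require the signals to be sustained over a window:

```python
def scale_decision(queue_age_s, gpu_util, current_workers,
                   max_workers=8, max_queue_age_s=120, low_util=0.3):
    """Scale out on backlog, scale in on low utilization, and never
    exceed the budget-driven worker cap."""
    if queue_age_s > max_queue_age_s and current_workers < max_workers:
        return current_workers + 1   # backlog growing: add capacity
    if gpu_util < low_util and current_workers > 1:
        return current_workers - 1   # paying for idle silicon: shrink
    return current_workers           # steady state

scale_out = scale_decision(queue_age_s=300, gpu_util=0.9, current_workers=4)
scale_in = scale_decision(queue_age_s=10, gpu_util=0.1, current_workers=4)
```

The `max_workers` cap is the budget guardrail: even a legitimate backlog cannot scale past what you agreed to pay for.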

7) Enforce budgets, retry caps, and circuit breakers

This is where teams save themselves from catastrophic spend. Guardrails to implement:

  • Retry caps: hard limit on retries per task and per job.
  • Timeouts: kill stuck tasks and surface failure reasons quickly.
  • Budget-aware scheduling: do not start new runs if monthly budget is near cap.
  • Backfill controls: limit historical reprocessing and require explicit approvals for large backfills.
  • Egress monitoring: alert on unusual data movement.

Guardrails you should treat as non-negotiable

  • Hard caps on retries and exponential backoff with jitter.
  • Per-job timeouts with structured error reporting.
  • Per-pipeline budgets (monthly) and per-run budgets (single execution).
  • Automatic cancellation when inputs exceed expected size ranges.
  • Alerts on low utilization (idle GPU) and high failure rates.
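
The budget and abnormal-input guardrails above can live in one small admission check. This is a sketch under assumed thresholds (a 90% soft stop, a 10M-row input cap), not a reference implementation:

```python
class BudgetBreaker:
    """Refuse to start new runs once spend approaches the monthly cap,
    and trip on abnormal input sizes that suggest an accidental backfill."""

    def __init__(self, monthly_budget, soft_stop=0.9, max_input_rows=10_000_000):
        self.monthly_budget = monthly_budget
        self.soft_stop = soft_stop
        self.max_input_rows = max_input_rows
        self.spent = 0.0

    def admit(self, estimated_cost, input_rows):
        if input_rows > self.max_input_rows:
            return False  # abnormal input: require explicit approval
        if self.spent + estimated_cost > self.monthly_budget * self.soft_stop:
            return False  # near the cap: stop starting new runs
        return True

    def record(self, actual_cost):
        self.spent += actual_cost

breaker = BudgetBreaker(monthly_budget=1000.0)
admitted = breaker.admit(estimated_cost=50.0, input_rows=1_000_000)
breaker.record(880.0)
blocked = breaker.admit(estimated_cost=50.0, input_rows=1_000_000)
```

Wire the `admit` check into whatever launches jobs, so the breaker is consulted before capacity is allocated, not after.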

The cost levers that matter and what they do

Use this table as a quick decision reference. It links a symptom (what you observe) to a cost lever (what you change) and the expected effect.

  • Low GPU utilization, high wall time (bottleneck: input pipeline). First lever: prefetch + parallel preprocessing; second: cache + format changes. Outcome: higher throughput with the same GPU count.
  • High GPU utilization, slow progress (bottleneck: kernel efficiency). First lever: batching + fused ops; second: mixed precision. Outcome: lower cost per unit of work.
  • OOM errors or frequent spills (bottleneck: VRAM pressure). First lever: reduce batch size, checkpointing; second: mixed precision. Outcome: stability and fewer retries.
  • Fast compute, slow end-to-end (bottleneck: I/O or writes). First lever: co-locate storage, compress outputs; second: write fewer artifacts. Outcome: shorter wall time, lower egress.
  • Distributed job slower than single GPU (bottleneck: communication/shuffles). First lever: partitioning and skew fixes; second: aggregate locally first. Outcome: distributed speedup becomes real.
  • Costs jump unpredictably (bottleneck: runaway jobs). First lever: retry caps + budgets; second: circuit breakers. Outcome: spend becomes bounded and predictable.

Practical scenarios and what to optimize first

“Analytics” becomes clearer when you tie it to real pipeline patterns. Below are scenarios where GPU cost optimization shows up frequently, including how the bottleneck shifts.

Scenario: embedding generation for large corpora

Embeddings are a classic GPU analytics workload: high throughput, batch-friendly, and easily scaled. It is also a workload where teams accidentally waste money by running inference with small batches or repeatedly loading models.

Optimize in this order:

  • Model warmup and reuse: keep workers alive; avoid reloading weights constantly.
  • Batch sizing: increase batch size until utilization stabilizes, then validate output consistency.
  • Mixed precision inference: often safe, big memory savings.
  • Write strategy: write embeddings in larger blocks; avoid per-item writes.
  • Deduping and caching: do not re-embed identical texts or near-duplicates unless required.
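
The dedup-and-cache idea amounts to hashing inputs so the expensive embedding call runs once per unique item. The `embed_fn` below is a stand-in for your real model call:

```python
import hashlib

def embed_with_cache(texts, embed_fn, cache=None):
    """Hash each input and only call the expensive embedding function
    for content not seen before; duplicates reuse the cached vector."""
    cache = {} if cache is None else cache
    out = []
    for text in texts:
        key = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if key not in cache:
            cache[key] = embed_fn(text)  # the only GPU-priced call
        out.append(cache[key])
    return out

# Example with a fake embedder that records how often it is invoked.
calls = []
def fake_embed(t):
    calls.append(t)
    return [len(t)]  # stand-in for a real embedding vector

vecs = embed_with_cache(["a", "bb", "a", "bb", "ccc"], fake_embed)
```

Five inputs come back as five vectors, but only three unique texts hit the model. On corpora with heavy duplication (logs, transactions, scraped text), this alone can cut embedding spend substantially.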

Scenario: streaming classification or anomaly scoring

Streaming pipelines care about latency and steady cost. The biggest trap is running the GPU on tiny batches in the name of “real time,” which makes each inference expensive.

The typical fix is micro-batching with a latency budget: batch for 50 to 200 milliseconds, then infer. You get most throughput benefits while keeping near-real-time behavior.
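
A micro-batcher with a latency budget can be sketched as below. The `infer_fn` is a placeholder for your batched model call, and the wait/size limits are illustrative:

```python
import time

def micro_batch(stream, infer_fn, max_wait_s=0.05, max_batch=64):
    """Collect items until `max_batch` is reached or `max_wait_s` has
    elapsed, then run one batched inference call. Trades a small,
    bounded latency for much better throughput per inference."""
    results, batch = [], []
    deadline = time.monotonic() + max_wait_s
    for item in stream:
        batch.append(item)
        if len(batch) >= max_batch or time.monotonic() >= deadline:
            results.extend(infer_fn(batch))
            batch, deadline = [], time.monotonic() + max_wait_s
    if batch:
        results.extend(infer_fn(batch))  # flush the tail
    return results

# Example with a fake model that records batch sizes it was given.
batch_sizes = []
def fake_infer(batch):
    batch_sizes.append(len(batch))
    return [x * 2 for x in batch]

out = micro_batch(range(10), fake_infer, max_wait_s=1.0, max_batch=4)
```

A production version would read from a concurrent queue rather than an iterator, but the trade-off is the same: items wait up to one latency budget, and the GPU sees full batches instead of singles.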

Scenario: backfills and historical rebuilds

Backfills are where GPU budgets die. A team runs a “one-time rebuild,” then the rebuild becomes a recurring habit. The pipeline grows quietly until it becomes a permanent monthly bill.

Fix patterns:

  • Set explicit backfill windows and rate limits.
  • Store intermediate results and resume from checkpoints.
  • Use spot or interruptible capacity when your pipeline can tolerate it.
  • Implement budgets that require approval for large backfills.

Scenario: on-chain analytics at scale (MEV, fraud, graph patterns)

On-chain analytics adds three cost multipliers: volume, adversarial data, and reprocessing requirements. You backfill, you update labels, you rebuild features, you run detection repeatedly. This is where your optimization and guardrails must be strong.

If this is your world, read prerequisite reading early: MEV Sandwich Detection at Scale. It reflects the workload patterns that create runaway compute if you do not control scope, streaming behavior, and evaluation.

For on-chain research tooling and data workflows, the directory at AI Crypto Tools can complement your internal pipeline and monitoring strategy.

What to monitor so costs do not drift back up

Optimization is not permanent. Pipelines drift: new data sources, new features, larger models, new preprocessing, new retries, new backfills. If you do not monitor, costs will creep back up.

Core signals that catch spend drift early

  • GPU utilization time series: alerts on sustained low utilization.
  • GPU memory usage: rising memory usage often predicts OOM retries.
  • Batch time breakdown: increasing prep time indicates CPU or I/O regressions.
  • Queue age: growing backlog means scaling or throughput is failing.
  • Retry rate: spikes usually precede big cost increases.
  • Cost per unit: the best metric; normalize spend by output volume.
  • Egress and storage I/O: unusual increases often indicate pipeline mistakes.
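
The cost-per-unit signal is easy to turn into an early-warning alert: compare a recent window against the window before it. The window length and 1.25x threshold here are illustrative assumptions:

```python
def cost_drift_alert(history, window=4, threshold=1.25):
    """Fire when the recent average cost-per-unit exceeds the prior
    window's average by `threshold`x. `history` is oldest-to-newest."""
    if len(history) < 2 * window:
        return False  # not enough data to compare two windows
    baseline = sum(history[-2 * window:-window]) / window
    recent = sum(history[-window:]) / window
    return recent > baseline * threshold

drifting = cost_drift_alert([1.0, 1.0, 1.1, 1.0, 1.0, 1.4, 1.5, 1.6])
steady = cost_drift_alert([1.0] * 8)
```

Because the metric is already normalized by output volume, this alert fires on genuine efficiency regressions rather than on the pipeline simply processing more data.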

Quality gates that protect correctness

Cost optimization must not quietly degrade output quality. A pragmatic “quality gate” system includes:

  • Distribution checks on key features (mean/variance drift, quantiles).
  • Sample parity checks (same input sample yields comparable output).
  • Holdout accuracy checks for ML-based analytics.
  • Alerting on unusual output spikes (anomaly score distribution collapses or explodes).
Why monitoring matters: costs drift even after “optimization.” Without alerts, small regressions accumulate into a big bill; with alerts, you correct early.

Tools and workflow that fit this topic

GPU cost optimization becomes much easier when you treat it as a repeatable practice. Your team should have a baseline learning path, reusable prompts for investigations, and a way to keep up with best practices as the ecosystem changes.

A clean learning path for teams

If your team is building analytics pipelines and gradually adopting GPU compute, the fastest way to reduce mistakes is structured learning:

  • Use the AI Learning Hub to build shared fundamentals across the team.
  • Use the Prompt Libraries to standardize investigations (bottleneck diagnosis, cost drift analysis, and incident response).
  • For ongoing implementation notes and playbooks, you can Subscribe.

When a GPU compute provider is materially relevant

Once your pipeline is instrumented and your workload shape is clear, you often need flexible GPU capacity for: backfills, batch embedding runs, large inference sweeps, or model experiments that are not worth owning hardware for. In those cases, a GPU compute marketplace can be useful.

One option you can explore is Runpod, especially when you want to scale up temporarily and shut down quickly. The key is to pair flexible compute with the guardrails in this guide so costs stay bounded.

When an on-chain analytics tool is relevant

If your “analytics” includes crypto-specific workflows, cost optimization is inseparable from data quality and validation. A practical pattern is: use your GPU pipeline for heavy computation (embeddings, clustering, detection), then use specialized data tooling to validate assumptions and enrich analysis.

If that matches your workflow, you can explore Nansen as one option for on-chain intelligence that pairs well with GPU-powered detection and research pipelines.

Make GPU spend predictable, not surprising

The teams that win at GPU cost optimization do the same boring things consistently: they measure bottlenecks, fix input throughput, batch effectively, use mixed precision safely, right-size instances, autoscale based on real signals, and enforce guardrails that prevent runaway spend. If you want more implementation playbooks and workflow templates, you can Subscribe.

Conclusion: optimize the system, then lock it in with monitoring

GPU Cost Optimization for Analytics is fundamentally a systems problem. You are optimizing a pipeline made of data movement, preprocessing, compute kernels, distributed coordination, and operational guardrails. If you only tune the GPU kernel, you often save almost nothing. If you instrument and fix the pipeline that feeds the GPU, you usually save a lot.

The safest path is consistent: measure utilization and batch timings, remove I/O bottlenecks, apply batching and mixed precision, right-size instances, add autoscaling, and enforce budgets and retry caps so failures never become catastrophic spend.

If your analytics touches on-chain patterns, adversarial behavior, or MEV detection, revisit prerequisite reading: MEV Sandwich Detection at Scale. It is a practical reminder that data and adversaries change workload shape, which changes cost.

For structured learning and ongoing playbooks, use the AI Learning Hub, speed up investigations with Prompt Libraries, and keep the stack current by Subscribing.

FAQs

What is the fastest way to cut GPU analytics costs?

The fastest way is usually increasing useful GPU utilization by fixing the input pipeline: parallelize preprocessing, prefetch batches, cache hot data, reduce CPU↔GPU transfers, and batch effectively. Most GPU waste is idle time, not slow kernels.

How do I know if my pipeline is GPU-bound or data-bound?

If GPU utilization is low or spiky and batch times are dominated by preprocessing or reads, you are data-bound. If GPU utilization is consistently high and compute time dominates, you are closer to GPU-bound. Always break batch time into prep, transfer, compute, and write-out.

Is mixed precision safe for analytics workloads?

Often yes, especially for deep learning inference (embeddings, classification) and matrix operations. You should validate output distributions and thresholds, and keep sensitive accumulations in higher precision if needed.

Why did scaling to multiple GPUs make my job slower?

Distributed overhead can dominate when you have heavy communication, large shuffles, skewed partitions, or frequent synchronization. Fix partitioning, push down filters early, aggregate locally, and measure communication time explicitly.

What guardrails prevent runaway GPU spend?

Retry caps, timeouts, per-run and monthly budgets, circuit breakers on abnormal input sizes, and alerts on low utilization and high failure rates. These controls stop “zombie spend” from turning small failures into huge invoices.

How does on-chain analytics change GPU cost optimization?

On-chain analytics often increases volume, backfill needs, and adversarial patterns that force repeated recalculation. This changes workload shape and can make costs grow nonlinearly if scope and guardrails are not strict. If this is your domain, prerequisite reading on MEV detection at scale is helpful.

What should I monitor weekly to keep costs stable?

GPU utilization, memory usage, batch time breakdown, queue age, retry rates, egress and storage I/O, and cost per unit of work. Add quality gates so optimization does not silently degrade correctness.

Final reminder: the best cost optimization is repeatable. Instrument first, fix input starvation, batch wisely, use mixed precision safely, right-size resources, autoscale based on real signals, and enforce guardrails that bound spend. If your analytics involves on-chain MEV or adversarial patterns, revisit: MEV Sandwich Detection at Scale. For ongoing playbooks and workflow updates, you can Subscribe.

About the author: Wisdom Uche Ijika
Founder @TokenToolHub | Web3 Research, Token Security & On-Chain Intelligence | Building Tools for Safer Crypto | Solidity & Smart Contract Enthusiast