The Data Engineer's Guide to Mastering Systems Design

Practical Insights to Become an Elite Data Systems Architect

Yordan Ivanov
Jun 26, 2024


Designing robust data platforms is a complex challenge in software engineering. The mix of technologies and evolving best practices can be overwhelming.

In this article, you and I will explore key concepts of data systems design. We'll cover scalability, reliability, security, and cost-effectiveness.

I'll show you real-world examples and discuss common pitfalls. You'll get practical tips to improve your data engineering skills.

Sound good? Then, let’s get started!

Reading time: 13 minutes


I'm looking for a Senior Data Engineer to join my team on a fixed-term contract with the potential to go permanent. This is your chance to work remotely from anywhere in Bulgaria!

Do you possess solid data/software engineering skills, experience with AWS (or similar), and a deep understanding of systems design principles?

You might be the perfect fit for this role!

Already typing "interested" in the comments? Do this instead:

  • Reply to this email 📧

  • Send me a DM on LinkedIn! 💬

  • Apply directly using the link below. 🚀

Let's work together


🧱 Introduction: The Foundation of Data Systems Design

In data engineering, systems design is the process of creating data architectures. These architectures must be scalable, reliable, and easy to maintain. They must handle the growing demands of modern businesses.

Mastering systems design is crucial. It enables you to create efficient and reliable data pipelines.


By developing solid systems design skills, you'll be able to:

  • Design data architectures that can grow with your organisation

  • Ensure data quality and consistency across systems

  • Implement robust data processing pipelines

  • Optimise resource usage and reduce costs

  • Tackle complex data challenges and provide innovative solutions

Let's look at an example. Imagine you're asked to build a data pipeline that takes user clickstream data from multiple sources, processes it in real time, and stores it for analytics. How would you approach this?

First, think about the volume and speed of the data. Is it a few gigabytes per day or terabytes per hour? Based on that, you'd choose the right technologies: for example, Apache Kafka or AWS Kinesis for real-time data ingestion, paired with a stream-processing framework to transform events in flight.

Next, you design the pipeline architecture. Make sure it can scale as data volume grows. Put in place fault-tolerance mechanisms to handle failures smoothly.

Finally, think about maintainability. How can you make the pipeline modular and easy to update? One answer is to use Airflow for orchestration and keep the pipeline code under version control.
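
To make that concrete, here is a minimal sketch of such a pipeline as an Airflow DAG (assuming Airflow 2.x; the DAG name and task callables are hypothetical placeholders, not a production design):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


# Hypothetical task callables; in a real project each would live in its own module.
def ingest_clickstream(**context):
    """Pull the latest batch of clickstream events from the sources."""
    ...


def transform_events(**context):
    """Clean, deduplicate, and sessionise the raw events."""
    ...


def load_to_warehouse(**context):
    """Write the processed events to the analytics store."""
    ...


with DAG(
    dag_id="clickstream_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_clickstream)
    transform = PythonOperator(task_id="transform", python_callable=transform_events)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)

    # Each stage is a separate, independently testable task.
    ingest >> transform >> load
```

Because each stage is its own task, you can retry, test, or replace stages without touching the rest of the pipeline.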

By thinking through these aspects, you're already on your way to becoming an elite data systems architect. But there's more to explore!


📈 Scalability: Building Systems That Grow

Scalability is a fundamental aspect of data systems design. It allows your data architecture to grow seamlessly, without compromising performance.

Scalability in data systems means handling increasing amounts of data or users without significant performance degradation. It's crucial because data volumes and processing needs often grow rapidly.

To implement scalable data architectures:

  • Use data sharding: Distribute large datasets across multiple databases using a shard key (see the sketch after this list).

  • Use columnar storage: Use columnar databases or files like Apache Parquet for analytical workloads. They improve query performance on large datasets.

  • Implement data streaming: Use technologies like Apache Kafka. They enable real-time data ingestion and processing at scale.

  • Use distributed computing: Leverage frameworks like Apache Spark or Daft. They allow parallel processing of large-scale data.

  • Design for eventual consistency: In distributed systems, prioritise availability over immediate consistency. Use techniques like conflict-free replicated data types (CRDTs).
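
Here is a minimal, self-contained sketch of the hash-based shard routing mentioned in the first bullet. The four-shard count and the user-ID key are assumptions for illustration:

```python
import hashlib

NUM_SHARDS = 4  # assumed cluster size, purely for illustration


def shard_for(shard_key: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map a shard key (e.g. a user ID) to a shard number.

    MD5 is used only for its even distribution, not for security.
    """
    digest = hashlib.md5(shard_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards


# The same key always lands on the same shard, so reads know where to look.
assert shard_for("user-42") == shard_for("user-42")
print(shard_for("user-42"))  # a number between 0 and 3
```

Note that plain modulo hashing reshuffles most keys when you change the shard count; consistent hashing avoids that at the cost of extra complexity.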

Let's consider this scenario. Suppose you're working on a data platform for a big social media company. As user engagement skyrockets, the data volume explodes. How do you ensure your system can handle this growth?

One approach is to use a cloud-based data warehouse like Snowflake or Redshift. These platforms are designed to scale horizontally. They let you process huge amounts of data efficiently.

But scalability isn't just about throwing more hardware at the problem. It's also about designing smart data partitioning and indexing strategies. For example, you might partition user data by region and create indexes on frequently queried fields to optimise query performance.
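
In PySpark, that partitioning strategy might look like the following sketch (the S3 paths and the region column are invented for the example):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-example").getOrCreate()

# Hypothetical source path; replace with your own storage layout.
users = spark.read.parquet("s3://example-bucket/raw/users/")

# Partition on a low-cardinality, frequently filtered column such as region.
# Queries filtering on region will then scan only the matching directories.
(
    users.write
    .partitionBy("region")
    .mode("overwrite")
    .parquet("s3://example-bucket/curated/users/")
)
```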

Take Uber, for instance. They built a highly scalable data platform. It uses Kafka, HDFS, and Pinot. By carefully designing their data architecture, they were able to scale smoothly.

Related reading: “How does Uber build real-time infrastructure to handle petabytes of data every day?” by Vu Trinh.

💪 Reliability: Ensuring Consistent Performance

Reliability is a critical part of data systems design. It ensures that your data pipeline runs smoothly, even through unexpected failures or traffic spikes.

Reliability in data systems means making sure your system performs consistently and can recover from failures. It's critical because data loss or downtime can severely affect businesses.


To implement fault-tolerant and highly available systems:

  • Use data replication: Implement multi-region replication in databases like Aurora. They provide high availability.

  • Implement data validation: Use tools like Great Expectations. They validate data quality and catch issues early.

  • Use change data capture: Implement CDC with tools like Debezium. They reliably capture data changes across systems.

  • Implement idempotent data processing: Design your data transformations to be idempotent so retries are safe (see the sketch after this list).

  • Use data versioning: Implement data versioning in your data lake. Track changes and enable easy rollbacks.
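
To illustrate the idempotency bullet above, here is a small sketch in which every write is keyed on a stable event ID, so replaying a batch is harmless. The in-memory dictionary stands in for a database table with a primary key:

```python
from dataclasses import dataclass


@dataclass
class Event:
    event_id: str  # stable ID assigned once, at ingestion time
    payload: str


# Stand-in for a database table with a primary key on event_id.
store: dict[str, Event] = {}


def process_batch(events: list[Event]) -> None:
    """Upsert events by ID, so retries and replays are safe."""
    for event in events:
        store[event.event_id] = event  # overwrite, never append


batch = [Event("e-1", "click"), Event("e-2", "scroll")]
process_batch(batch)
process_batch(batch)  # a retry after a partial failure
assert len(store) == 2  # no duplicates: the retry was harmless
```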

Imagine you're managing a data system for a financial institution. Any downtime or data inconsistencies could have severe consequences. How do you build a reliable architecture?

One key strategy is to implement redundancy and failover mechanisms. For example, you might set up multiple Kafka clusters across different AWS availability zones. This ensures that the others can pick up the slack if one goes down.
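
One small piece of that redundancy story is topic replication inside each cluster. Using the kafka-python admin client, a replicated topic might be created like this (the broker addresses and topic settings are made up for the example):

```python
from kafka.admin import KafkaAdminClient, NewTopic

# Hypothetical brokers, one per availability zone.
admin = KafkaAdminClient(
    bootstrap_servers="broker-az1:9092,broker-az2:9092,broker-az3:9092"
)

# replication_factor=3 keeps a copy of every partition on three brokers, so
# losing one broker (or one zone, with rack awareness) loses no data.
admin.create_topics([
    NewTopic(name="payments", num_partitions=12, replication_factor=3)
])
```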

Another crucial aspect is data integrity. Implementing data validation checks, such as schema enforcement, can help. Consider tools like dbt and SQLMesh. They catch issues early and prevent them from spreading downstream.
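
The exact API depends on the tool you pick, but the underlying idea is simple enough to sketch in plain Python. The column names and types here are an invented contract, not from any real feed:

```python
# Hypothetical contract for a transactions feed.
EXPECTED_SCHEMA = {
    "transaction_id": str,
    "amount": float,
    "currency": str,
}


def validate_row(row: dict) -> list[str]:
    """Return a list of violations; an empty list means the row passes."""
    errors = []
    for column, expected_type in EXPECTED_SCHEMA.items():
        if column not in row:
            errors.append(f"missing column: {column}")
        elif not isinstance(row[column], expected_type):
            errors.append(f"{column}: expected {expected_type.__name__}")
    return errors


bad_row = {"transaction_id": "t-1", "amount": "12.50"}  # wrong type, missing field
print(validate_row(bad_row))
# ['amount: expected float', 'missing column: currency']
```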

Take Yelp, for example. They use the Cassandra Source Connector to stream data from their database. They also use Kafka and Flink to build their data pipeline.

https://engineeringblog.yelp.com/2019/12/cassandra-source-connector-part-1.html

🧰 Maintainability: Creating Sustainable Systems

Maintainability is a crucial consideration in systems design. It allows your architecture to evolve without becoming a tangled mess of spaghetti code.

Related reading: “Stop Writing Spaghetti Code: Instead, Make it SOLID” on Data Gibberish.

Maintainability in data systems refers to how easily a system can be modified. It's crucial for long-term success because requirements change and systems must evolve with them.

To implement practices for easy system updates and management:

  • Use data catalogues: Implement data discovery tools like Atlas or Amundsen. They make data assets easily discoverable.

  • Implement data lineage: Use tools like OpenLineage or Marquez to track data lineage. This makes it easier to manage dependencies.

  • Use schema evolution: Implement schema evolution techniques in your data lake to handle changing data structures without breaking queries (see the sketch after this list).

  • Implement data quality monitoring: Use tools like Synq or Soda. They continuously monitor data quality and alert on issues.

  • Use data pipeline orchestration: Implement workflow management tools like Airflow or Prefect. They manage complex data pipelines.
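
As a minimal illustration of the schema-evolution bullet above: when a new optional field is added, old records are read with a default instead of breaking downstream readers. The field names and default value are assumptions:

```python
# Version 2 of the schema added an optional "device" field; give old
# records a default so downstream readers never hit a missing key.
SCHEMA_DEFAULTS = {"device": "unknown"}


def normalise(record: dict) -> dict:
    """Upgrade a record written under any schema version to the current one."""
    return {**SCHEMA_DEFAULTS, **record}


old_record = {"user_id": "u-1", "event": "click"}  # written before v2
new_record = {"user_id": "u-2", "event": "view", "device": "mobile"}

print(normalise(old_record))  # {'device': 'unknown', 'user_id': 'u-1', 'event': 'click'}
print(normalise(new_record))  # {'device': 'mobile', 'user_id': 'u-2', 'event': 'view'}
```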

Picture this: you inherited a legacy data pipeline. It's been patched together over the years. It's a maze of scripts with no clear documentation or structure. Sound familiar? I bet it does!

To avoid such scenarios, focus on creating maintainable architectures from the start. That means writing clean, well-documented code, using modular design principles, and following best practices like version control and CI/CD.

Related reading: “Unlock the Power of Jenkins: Master Your dbt CI/CD Processes Easily” on Data Gibberish.

For example, let's say you're building a data transformation pipeline using dbt. Instead of writing one complex SQL query, you might break it into smaller, reusable models. You'd also use Git for version control and implement automated testing to catch any issues.

Airbnb, for instance, built a highly maintainable data infrastructure. They use Airflow for orchestration and a microservices architecture.

Related reading: “GroupBy #40: Data Infrastructure at Airbnb” by Vu Trinh.

🔐 Security: Protecting Data and Ensuring Compliance

Security is non-negotiable. As a data engineer, you are responsible for designing systems that protect sensitive information. You must ensure compliance with relevant regulations.

Security in data systems design involves protecting data from unauthorised access. It ensures compliance with relevant regulations. It's critical because data breaches can lead to financial losses, legal issues and loss of trust.

To implement robust security measures:

  • Implement data masking: Use dynamic data masking in databases to protect sensitive data (see the sketch after this list).

  • Use column-level encryption: Implement column-level encryption in data warehouses like Snowflake and BigQuery to protect specific sensitive fields.

  • Implement data access auditing: Use tools like CloudTrail to maintain detailed logs of data access and changes.

  • Use data loss prevention (DLP): Implement DLP tools like Cloud DLP or Macie. They automatically discover sensitive data.

  • Implement data governance: Use data governance platforms like Collibra or Alation. They manage data policies and access controls.
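
Warehouses implement dynamic masking natively, but the principle behind the first bullet is easy to sketch. This example masks an email address at read time for non-privileged users; the role names are invented:

```python
def mask_email(email: str) -> str:
    """Keep the first character and the domain; hide the rest."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"


def read_email(email: str, role: str) -> str:
    """Return the raw value only to privileged roles (role names are made up)."""
    return email if role == "data_admin" else mask_email(email)


print(read_email("jane.doe@example.com", role="analyst"))     # j***@example.com
print(read_email("jane.doe@example.com", role="data_admin"))  # jane.doe@example.com
```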

Imagine you're working on a healthcare data platform. Patient privacy is paramount. Any breaches could have severe legal consequences. How do you design a secure architecture?

This post is for paid subscribers
