Image by Author
# Introduction
If you've been working with data in Python, you have almost certainly used pandas. It has been the go-to library for data manipulation for over a decade. But recently, Polars has been gaining serious traction. Polars promises to be faster, more memory-efficient, and more intuitive than pandas. But is it worth learning? And how different is it really?
In this article, we'll compare pandas and Polars side by side. You'll see performance benchmarks and learn the syntax differences. By the end, you'll be able to make an informed decision for your next data project.
You can find the code on GitHub.
# Getting Started
Let's get both libraries installed first:
pip install pandas polars
Note: This article uses pandas 2.2.2 and Polars 1.31.0.
For this comparison, we'll also need a dataset that is large enough to show real performance differences. We'll use Faker to generate the test data.
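If Faker isn't already in your working environment, install it the same way:
pip install faker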
Now we're ready to start coding.
# Measuring Speed by Reading Large CSV Files
Let's start with one of the most common operations: reading a CSV file. We'll create a dataset with 1 million rows to see real performance differences.
First, let's generate our sample data:
import pandas as pd
from faker import Faker
import random

# Generate a large CSV file for testing
fake = Faker()
Faker.seed(42)
random.seed(42)

data = {
    'user_id': range(1000000),
    'name': [fake.name() for _ in range(1000000)],
    'email': [fake.email() for _ in range(1000000)],
    'age': [random.randint(18, 80) for _ in range(1000000)],
    'salary': [random.randint(30000, 150000) for _ in range(1000000)],
    'department': [random.choice(['Engineering', 'Sales', 'Marketing', 'HR', 'Finance'])
                   for _ in range(1000000)]
}

df_temp = pd.DataFrame(data)
df_temp.to_csv('large_dataset.csv', index=False)
print("✓ Generated large_dataset.csv with 1M rows")
This code creates a CSV file with realistic data. Now let's compare read speeds:
import pandas as pd
import polars as pl
import time

# pandas: Read CSV
start = time.time()
df_pandas = pd.read_csv('large_dataset.csv')
pandas_time = time.time() - start

# Polars: Read CSV
start = time.time()
df_polars = pl.read_csv('large_dataset.csv')
polars_time = time.time() - start

print(f"Pandas read time: {pandas_time:.2f} seconds")
print(f"Polars read time: {polars_time:.2f} seconds")
print(f"Polars is {pandas_time/polars_time:.1f}x faster")
Output when reading the sample CSV:
Pandas read time: 1.92 seconds
Polars read time: 0.23 seconds
Polars is 8.2x faster
Here's what's happening: we time how long each library takes to read the same CSV file. While pandas uses its traditional single-threaded CSV reader, Polars automatically parallelizes the read across multiple CPU cores. We then calculate the speedup factor.
On most machines, you'll see Polars being 2-5x faster at reading CSVs. The difference becomes even more significant with larger files.
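As an aside, and not part of the benchmark above: pandas 2.x can also delegate CSV parsing to the multi-threaded PyArrow reader, which narrows the gap somewhat. A minimal sketch, assuming pyarrow is installed:
import pandas as pd

# Use PyArrow's multi-threaded CSV parser instead of the default C engine
df_pandas_arrow = pd.read_csv('large_dataset.csv', engine='pyarrow')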
# Measuring Memory Usage During Operations
Speed isn't the only consideration. Let's compare how much memory each library uses. We'll perform a series of operations and measure memory consumption. Please pip install psutil if you don't already have it in your working environment:
import pandas as pd
import polars as pl
import psutil
import os
import gc  # Garbage collector, for better memory-release attempts

def get_memory_usage():
    """Get current process memory usage in MB"""
    process = psutil.Process(os.getpid())
    return process.memory_info().rss / 1024 / 1024

# --- Test with pandas ---
gc.collect()
initial_memory_pandas = get_memory_usage()

df_pandas = pd.read_csv('large_dataset.csv')
filtered_pandas = df_pandas[df_pandas['age'] > 30]
grouped_pandas = filtered_pandas.groupby('department')['salary'].mean()

pandas_memory = get_memory_usage() - initial_memory_pandas
print(f"Pandas memory delta: {pandas_memory:.1f} MB")

del df_pandas, filtered_pandas, grouped_pandas
gc.collect()

# --- Test with Polars (eager mode) ---
gc.collect()
initial_memory_polars = get_memory_usage()

df_polars = pl.read_csv('large_dataset.csv')
filtered_polars = df_polars.filter(pl.col('age') > 30)
grouped_polars = filtered_polars.group_by('department').agg(pl.col('salary').mean())

polars_memory = get_memory_usage() - initial_memory_polars
print(f"Polars memory delta: {polars_memory:.1f} MB")

del df_polars, filtered_polars, grouped_polars
gc.collect()

# --- Summary ---
if pandas_memory > 0 and polars_memory > 0:
    print(f"Memory savings (Polars vs Pandas): {(1 - polars_memory/pandas_memory) * 100:.1f}%")
elif pandas_memory == 0 and polars_memory > 0:
    print(f"Polars used {polars_memory:.1f} MB while Pandas used 0 MB.")
elif polars_memory == 0 and pandas_memory > 0:
    print(f"Polars used 0 MB while Pandas used {pandas_memory:.1f} MB.")
else:
    print("Cannot compute memory savings due to zero or negative memory usage delta in both frameworks.")
This code measures the memory footprint:
- We use the psutil library to track memory usage before and after the operations
- Both libraries read the same file and perform filtering and grouping
- We calculate the difference in memory consumption
Sample output:
Pandas memory delta: 44.4 MB
Polars memory delta: 1.3 MB
Memory savings (Polars vs Pandas): 97.1%
The results above show the memory usage delta for both pandas and Polars when performing the filtering and aggregation operations on large_dataset.csv.
- pandas memory delta: the memory consumed by pandas for the operations.
- Polars memory delta: the memory consumed by Polars for the same operations.
- Memory savings (Polars vs pandas): how much less memory Polars used compared to pandas, as a percentage.
It's common for Polars to show better memory efficiency thanks to its columnar data storage and optimized execution engine. Typically, you'll see 30% to 70% improvements from using Polars.
Note: However, sequential memory measurements within the same Python process using psutil.Process(...).memory_info().rss can often be misleading. Python's memory allocator doesn't always release memory back to the operating system immediately, so a "cleaned" baseline for a subsequent test may still be influenced by prior operations. For the most accurate comparisons, tests should ideally be run in separate, isolated Python processes.
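One way to get cleaner numbers is to run each measurement in its own, freshly started process. Below is a minimal sketch using the standard-library multiprocessing module; the function and variable names are illustrative rather than part of the original code:
import multiprocessing as mp

def measure_pandas(queue):
    # Runs in a fresh interpreter, so the baseline isn't polluted by earlier work
    import os
    import psutil
    import pandas as pd
    before = psutil.Process(os.getpid()).memory_info().rss
    df = pd.read_csv('large_dataset.csv')
    grouped = df[df['age'] > 30].groupby('department')['salary'].mean()  # keep a reference while measuring
    after = psutil.Process(os.getpid()).memory_info().rss
    queue.put((after - before) / 1024 / 1024)

if __name__ == '__main__':
    ctx = mp.get_context('spawn')  # 'spawn' guarantees a clean child process
    queue = ctx.Queue()
    proc = ctx.Process(target=measure_pandas, args=(queue,))
    proc.start()
    delta_mb = queue.get()
    proc.join()
    print(f"Pandas memory delta (isolated process): {delta_mb:.1f} MB")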
# Comparing Syntax for Basic Operations
Now let's look at how the syntax differs between the two libraries. We'll cover the most common operations you'll use.
// Selecting Columns
Let's select a subset of columns. We'll create a much smaller DataFrame for this (and the following examples).
import pandas as pd
import polars as pl

# Create sample data
data = {
    'name': ['Anna', 'Betty', 'Cathy'],
    'age': [25, 30, 35],
    'salary': [50000, 60000, 70000]
}

# Pandas approach
df_pandas = pd.DataFrame(data)
result_pandas = df_pandas[['name', 'salary']]

# Polars approach
df_polars = pl.DataFrame(data)
result_polars = df_polars.select(['name', 'salary'])

# Alternative: more expressive
result_polars_alt = df_polars.select([pl.col('name'), pl.col('salary')])

print("Pandas result:")
print(result_pandas)
print("\nPolars result:")
print(result_polars)
The key differences here:
- pandas uses bracket notation: df[['col1', 'col2']]
- Polars uses the .select() method
- Polars also supports the more expressive pl.col() syntax, which becomes powerful for complex operations
Output:
Pandas result:
    name  salary
0   Anna   50000
1  Betty   60000
2  Cathy   70000

Polars result:
shape: (3, 2)
┌───────┬────────┐
│ name  ┆ salary │
│ ---   ┆ ---    │
│ str   ┆ i64    │
╞═══════╪════════╡
│ Anna  ┆ 50000  │
│ Betty ┆ 60000  │
│ Cathy ┆ 70000  │
└───────┴────────┘
Both produce the same output, but Polars' syntax is more explicit about what you are doing.
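To show why the expression style can be powerful, here is a small sketch of my own that selects and transforms in one step (the monthly_salary column is invented for illustration):
# Select one column as-is and derive another in the same call
monthly = df_polars.select(
    pl.col('name'),
    (pl.col('salary') / 12).alias('monthly_salary')
)
print(monthly)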
// Filtering Rows
Now let’s filter rows:
# pandas: Filter rows where age > 28
filtered_pandas = df_pandas[df_pandas['age'] > 28]

# Alternative pandas syntax with query
filtered_pandas_alt = df_pandas.query('age > 28')

# Polars: Filter rows where age > 28
filtered_polars = df_polars.filter(pl.col('age') > 28)

print("Pandas filtered:")
print(filtered_pandas)
print("\nPolars filtered:")
print(filtered_polars)
Notice the differences:
- In pandas, we use boolean indexing with bracket notation. You can also use the .query() method.
- Polars uses the .filter() method with pl.col() expressions.
- Polars' syntax reads more like SQL: "filter where column age is greater than 28".
Output:
Pandas filtered:
    name  age  salary
1  Betty   30   60000
2  Cathy   35   70000

Polars filtered:
shape: (2, 3)
┌───────┬─────┬────────┐
│ name  ┆ age ┆ salary │
│ ---   ┆ --- ┆ ---    │
│ str   ┆ i64 ┆ i64    │
╞═══════╪═════╪════════╡
│ Betty ┆ 30  ┆ 60000  │
│ Cathy ┆ 35  ┆ 70000  │
└───────┴─────┴────────┘
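Combining conditions follows the same pattern in both libraries. A small sketch of my own, building on the frames above:
# pandas: combine conditions with & and parentheses
both_pandas = df_pandas[(df_pandas['age'] > 28) & (df_pandas['salary'] < 70000)]

# Polars: combine pl.col() expressions with & inside .filter()
both_polars = df_polars.filter((pl.col('age') > 28) & (pl.col('salary') < 70000))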
// Adding New Columns
Now let’s add new columns to the DataFrame:
# pandas: Add new columns
df_pandas['bonus'] = df_pandas['salary'] * 0.1
df_pandas['total_comp'] = df_pandas['salary'] + df_pandas['bonus']

# Polars: Add new columns
df_polars = df_polars.with_columns([
    (pl.col('salary') * 0.1).alias('bonus'),
    (pl.col('salary') * 1.1).alias('total_comp')
])

print("Pandas with new columns:")
print(df_pandas)
print("\nPolars with new columns:")
print(df_polars)
Output:
Pandas with new columns:
    name  age  salary   bonus  total_comp
0   Anna   25   50000  5000.0     55000.0
1  Betty   30   60000  6000.0     66000.0
2  Cathy   35   70000  7000.0     77000.0

Polars with new columns:
shape: (3, 5)
┌───────┬─────┬────────┬────────┬────────────┐
│ name  ┆ age ┆ salary ┆ bonus  ┆ total_comp │
│ ---   ┆ --- ┆ ---    ┆ ---    ┆ ---        │
│ str   ┆ i64 ┆ i64    ┆ f64    ┆ f64        │
╞═══════╪═════╪════════╪════════╪════════════╡
│ Anna  ┆ 25  ┆ 50000  ┆ 5000.0 ┆ 55000.0    │
│ Betty ┆ 30  ┆ 60000  ┆ 6000.0 ┆ 66000.0    │
│ Cathy ┆ 35  ┆ 70000  ┆ 7000.0 ┆ 77000.0    │
└───────┴─────┴────────┴────────┴────────────┘
Here's what is going on:
- pandas uses direct column assignment, which modifies the DataFrame in place
- Polars uses .with_columns() and returns a new DataFrame (immutable by default)
- In Polars, you use .alias() to name the new column
The Polars approach promotes immutability and makes data transformations more readable.
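One subtlety worth adding here: every expression inside a single .with_columns() call sees only the original columns, which is why total_comp is computed as salary * 1.1 above instead of salary + bonus. If you want to reuse a freshly created column, chain a second call. A minimal sketch:
# Chain two .with_columns() calls so the second one can see 'bonus'
df_polars = (
    df_polars
    .with_columns((pl.col('salary') * 0.1).alias('bonus'))
    .with_columns((pl.col('salary') + pl.col('bonus')).alias('total_comp'))
)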
# Measuring Performance in Grouping and Aggregating
Let's look at a more useful example: grouping data and calculating multiple aggregations. This code shows how we group data by department, calculate several statistics on different columns, and time both operations to see the performance difference:
# Load our large dataset
df_pandas = pd.read_csv('large_dataset.csv')
df_polars = pl.read_csv('large_dataset.csv')

# pandas: Group by department and calculate stats
import time

start = time.time()
result_pandas = df_pandas.groupby('department').agg({
    'salary': ['mean', 'median', 'std'],
    'age': 'mean'
}).reset_index()
result_pandas.columns = ['department', 'avg_salary', 'median_salary', 'std_salary', 'avg_age']
pandas_time = time.time() - start

# Polars: Same operation
start = time.time()
result_polars = df_polars.group_by('department').agg([
    pl.col('salary').mean().alias('avg_salary'),
    pl.col('salary').median().alias('median_salary'),
    pl.col('salary').std().alias('std_salary'),
    pl.col('age').mean().alias('avg_age')
])
polars_time = time.time() - start

print(f"Pandas time: {pandas_time:.3f}s")
print(f"Polars time: {polars_time:.3f}s")
print(f"Speedup: {pandas_time/polars_time:.1f}x")
print("\nPandas result:")
print(result_pandas)
print("\nPolars result:")
print(result_polars)
Output:
Pandas time: 0.126s
Polars time: 0.077s
Speedup: 1.6x

Pandas result:
    department    avg_salary  median_salary    std_salary    avg_age
0  Engineering  89954.929266        89919.0  34595.585863  48.953405
1      Finance  89898.829762        89817.0  34648.373383  49.006690
2           HR  90080.629637        90177.0  34692.117761  48.979005
3    Marketing  90071.721095        90154.0  34625.095386  49.085454
4        Sales  89980.433386        90065.5  34634.974505  49.003168

Polars result:
shape: (5, 5)
┌─────────────┬──────────────┬───────────────┬──────────────┬───────────┐
│ department  ┆ avg_salary   ┆ median_salary ┆ std_salary   ┆ avg_age   │
│ ---         ┆ ---          ┆ ---           ┆ ---          ┆ ---       │
│ str         ┆ f64          ┆ f64           ┆ f64          ┆ f64       │
╞═════════════╪══════════════╪═══════════════╪══════════════╪═══════════╡
│ HR          ┆ 90080.629637 ┆ 90177.0       ┆ 34692.117761 ┆ 48.979005 │
│ Sales       ┆ 89980.433386 ┆ 90065.5       ┆ 34634.974505 ┆ 49.003168 │
│ Engineering ┆ 89954.929266 ┆ 89919.0       ┆ 34595.585863 ┆ 48.953405 │
│ Marketing   ┆ 90071.721095 ┆ 90154.0       ┆ 34625.095386 ┆ 49.085454 │
│ Finance     ┆ 89898.829762 ┆ 89817.0       ┆ 34648.373383 ┆ 49.00669  │
└─────────────┴──────────────┴───────────────┴──────────────┴───────────┘
Breaking down the syntax:
- pandas uses a dictionary to specify aggregations, which can get confusing with complex operations
- Polars uses method chaining: each operation is explicit and named
The Polars syntax is more verbose but also more readable. You can immediately see which statistics are being calculated.
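For what it's worth, pandas also supports named aggregation, which avoids the manual column rename used above. A small sketch of the same query in that style (my example, not the article's):
# pandas named aggregation: output column names are given directly
result_pandas_named = df_pandas.groupby('department').agg(
    avg_salary=('salary', 'mean'),
    median_salary=('salary', 'median'),
    std_salary=('salary', 'std'),
    avg_age=('age', 'mean')
).reset_index()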
# Understanding Lazy Evaluation in Polars
Lazy evaluation is one of Polars' most useful features. It means Polars doesn't execute your query immediately. Instead, it plans the entire operation and optimizes it before running.
Let's see this in action:
import polars as pl

# Read in lazy mode
df_lazy = pl.scan_csv('large_dataset.csv')

# Build a complex query
result = (
    df_lazy
    .filter(pl.col('age') > 30)
    .filter(pl.col('salary') > 50000)
    .group_by('department')
    .agg([
        pl.col('salary').mean().alias('avg_salary'),
        pl.len().alias('employee_count')
    ])
    .filter(pl.col('employee_count') > 1000)
    .sort('avg_salary', descending=True)
)

# Nothing has been executed yet!
print("Query plan created, but not executed")

# Now execute the optimized query
import time
start = time.time()
result_df = result.collect()  # This runs the query
execution_time = time.time() - start

print(f"\nExecution time: {execution_time:.3f}s")
print(result_df)
Output:
Query plan created, but not executed

Execution time: 0.177s
shape: (5, 3)
┌─────────────┬───────────────┬────────────────┐
│ department  ┆ avg_salary    ┆ employee_count │
│ ---         ┆ ---           ┆ ---            │
│ str         ┆ f64           ┆ u32            │
╞═════════════╪═══════════════╪════════════════╡
│ HR          ┆ 100101.595816 ┆ 132212         │
│ Marketing   ┆ 100054.012365 ┆ 132470         │
│ Sales       ┆ 100041.01049  ┆ 132035         │
│ Finance     ┆ 99956.527217  ┆ 132143         │
│ Engineering ┆ 99946.725458  ┆ 132384         │
└─────────────┴───────────────┴────────────────┘
Here, scan_csv() doesn't load the file immediately; it only plans to read it. We chain multiple filters, groupings, and a sort. Polars analyzes the entire query and optimizes it; for example, it can push filters down so they are applied before all the data is read.
Only when we call .collect() does the actual computation happen. The optimized query runs much faster than executing each step eagerly.
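If you're curious about what the optimizer decided, a LazyFrame can print its plan before you collect it. A quick sketch; the exact plan text depends on your Polars version:
# Inspect the optimized query plan without executing it
print(result.explain())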
# Wrapping Up
As we've seen, Polars is super useful for data processing in Python. It's faster, more memory-efficient, and has a cleaner API than pandas. That said, pandas isn't going anywhere. It has over a decade of development behind it, a huge ecosystem, and millions of users. For many projects, pandas is still the right choice.
Learn Polars if you're considering large-scale analysis for data engineering projects and the like. The syntax differences aren't huge, and the performance gains are real. But keep pandas in your toolkit for compatibility and quick exploratory work.
Start by trying Polars on a side project or a data pipeline that's running slowly. You'll quickly get a feel for whether it's right for your use case. Happy data wrangling!
Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.

