Python Lesson: pandas
Lesson outline
Introduction to pandas
Working with pandas objects
Loading and saving data in pandas
Dataframe and Series manipulation
Graphics in pandas
Basic statistical modeling
Working with time series
Exercises
Introduction to pandas
The pandas library
A library designed to easily manipulate, clean, and analyze data sets of very different natures. Many of its idioms are taken from NumPy; the main difference is that pandas can handle heterogeneous tabular data, while NumPy only accepts homogeneous array data sets.
To import the pandas library:
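For example, using the conventional alias:

```python
import pandas as pd
import numpy as np   # NumPy is used throughout the examples below
```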
We now explain the two most important pandas data structures: Series and DataFrames.
Pandas Series
A pandas Series is a 1D array object that contains a sequence of values (homogeneous or not) and an associated array of data labels, its index. You can define a Series from any array; by default the indexes are the integers from zero to the number of elements minus one.
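For example, a minimal sketch:

```python
obj = pd.Series([4, 7, -5, 3])
obj
# 0    4
# 1    7
# 2   -5
# 3    3
# dtype: int64
```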
You can access the indexes and the values separately with the attributes index and values.
You can provide the index labels (not necessarily integers) when creating the data structure.
You can access an element by its index value, or provide a list of indexes and access a subset of values.
You can also perform NumPy-like operations on series and select elements in accordance with a given criterion
You can check the existence of a given index label with a syntax similar to the one used in dictionaries
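A sketch covering these operations (the values are illustrative):

```python
obj = pd.Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c'])

obj.index              # Index(['d', 'b', 'a', 'c'], dtype='object')
obj.values             # array([ 4,  7, -5,  3])

obj['a']               # a single element by index label
obj[['c', 'a', 'd']]   # a subset of values from a list of labels

obj[obj > 0]           # NumPy-like filtering
obj * 2                # NumPy-like arithmetic; labels are preserved

'b' in obj             # True: dictionary-like membership test on the labels
'e' in obj             # False
```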
You can also create a series directly from a dictionary.
You can also explicitly control the index label ordering
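A sketch with illustrative population figures for some Spanish regions (the values are made up for the example):

```python
sdata = {'Madrid': 6664, 'Cataluña': 7675, 'Andalucía': 8472, 'Melilla': 87}
ser1 = pd.Series(sdata)            # a Series directly from a dictionary

# Explicit index ordering; note that 'Ceuta' is not a key of sdata
labels = ['Andalucía', 'Cataluña', 'Ceuta', 'Madrid', 'Melilla']
ser2 = pd.Series(sdata, index=labels)
```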
As the Ceuta index does not exist in the dictionary, it is created with a NaN value. You can check for these missing values using the isnull function or method.
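For instance, continuing the sketch above:

```python
pd.isnull(ser2)   # function form; True for 'Ceuta'
ser2.isnull()     # equivalent method form
```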
A very useful feature is that when you operate with pandas series they are aligned by index label.
You can also name the series and its index to facilitate their identification.
You can also alter in-place the index labels of a series
And you can operate with the new series
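Continuing with the same illustrative series:

```python
ser1 + ser2                        # aligned by index label; 'Ceuta' stays NaN

ser2.name = 'population'           # name the series...
ser2.index.name = 'region'         # ...and its index

ser1.index = ['MAD', 'CAT', 'AND', 'MEL']   # alter the index labels in place
ser1 * 2                           # and operate with the new series
```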
Pandas Dataframes
The best-known pandas data structure is the dataframe, worked out to mimic the versatility of the equally named GNU R data structure. A dataframe can be considered a set of series sharing the same index. Therefore it has an index for the rows and a label for each column. Different columns can hold different dtypes, and even data within a column can be of different types.
We will see that there are several different ways of constructing a dataframe. A usual one is to start with a dict of equal-length lists or NumPy arrays.
As can be checked in the output, the row index is a default one, as in the Series case, and the columns keep the dictionary order. You can sort them at your convenience using the columns argument.
In fact, if a column name has no associated data in the dictionary, the column is created with missing values (NaNs). And, as with series, you can provide a given index too.
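A sketch with made-up temperature data:

```python
data = {'town': ['Nicosia', 'Nicosia', 'Nicosia', 'Limassol', 'Limassol'],
        'year': [2018, 2019, 2020, 2019, 2020],
        'temp': [19.5, 20.1, 20.3, 21.0, 21.4]}
frame = pd.DataFrame(data)

# Choose the column order; the absent 'rain' column is created with NaNs
frame2 = pd.DataFrame(data,
                      columns=['year', 'town', 'temp', 'rain'],
                      index=['one', 'two', 'three', 'four', 'five'])
```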
You can also set the name attribute for the dataframe columns and index and they will be displayed with the dataframe
You can retrieve any dataframe row or column. Columns are obtained as a series using the column name or by attribute access, though the second option is only valid for column names that are also valid Python variable names. Note that the series and index names are conserved. Rows can be retrieved using the loc attribute.
The values that you retrieve are not a copy of the underlying data, but a view of them. Be aware that if you modify them in place, this will affect the original dataframe. There is a copy method to obtain a copy of the data.
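Continuing with the frame2 sketch:

```python
frame2['town']        # a column as a Series, by name
frame2.town           # the same, by attribute access

frame2.loc['three']   # a row, by index label

safe = frame2.copy()  # an independent copy of the data
```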
You can modify an existing column or create a new column with a default value by assignment
You can also provide as a value an array with the right number of elements
If instead of an array you provide a series, there is more flexibility, as the indexes of the series will be aligned with the indexes of the dataframe, and any missing index will be replaced by a NaN missing value.
You can add a boolean column using the NumPy syntax
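For instance:

```python
frame2['rain'] = 300.0                  # a default value for the whole column
frame2['rain'] = np.arange(5.0)         # an array with the right number of elements

# A Series is aligned by index; missing indexes get NaN
val = pd.Series([450.0, 320.0], index=['two', 'four'])
frame2['rain'] = val

frame2['dry'] = frame2['rain'] < 400.0  # a boolean column, NumPy-style
```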
You can delete the added column using the del keyword.
You can use a syntax similar to the one used in NumPy arrays to transpose a dataframe and exchange the indexes and columns roles.
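For example:

```python
del frame2['dry']   # delete the added column
frame2.T            # transpose: indexes and columns exchange their roles
```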
Dataframe creation methods
There are many other ways of creating a dataframe apart from the previous one, which consists of formatting your data as a dictionary of lists or NumPy vectors. We present some of them here:
From a nested dict of dicts: the outer dictionary keys become the columns and the inner keys the row indexes.
You can also use a dict of series to build the dataframe.
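A sketch of both constructions:

```python
pop = {'Nicosia': {2019: 20.1, 2020: 20.3},
       'Limassol': {2019: 21.0, 2020: 21.4}}
frame3 = pd.DataFrame(pop)    # outer keys -> columns, inner keys -> row index

# The same DataFrame from a dict of Series
pdata = {'Nicosia': frame3['Nicosia'], 'Limassol': frame3['Limassol']}
pd.DataFrame(pdata)
```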
A dataframe can also be built directly from a two-dimensional ndarray.
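For instance:

```python
pd.DataFrame(np.random.randn(3, 4),
             index=['r1', 'r2', 'r3'],
             columns=['c1', 'c2', 'c3', 'c4'])
```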
Index objects
The array, tuple, list, or any other structure passed to the dataframe constructor as the index for the data set is transformed into an Index object. These objects are immutable. You can create an Index object with the pd.Index constructor and pass it to a data structure; these objects can also be shared among data structures.
An index can contain duplicate labels; a selection will then return all occurrences of the repeated labels.
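A sketch of these properties:

```python
labels = pd.Index(['a', 'b', 'b', 'c'])
obj = pd.Series(range(4), index=labels)

obj.index is labels   # True: the Index object is shared, not copied
# labels[0] = 'z'     # would raise TypeError: an Index is immutable

obj['b']              # duplicate label: all occurrences are returned
```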
Working with pandas objects
Reindexing series and dataframes
Reindexing a pandas object creates a new object with different index labels. With a series it rearranges the data according to the new index: any old index value missing in the new index is removed from the series, and NaN values are introduced for nonexistent index values.
When applied to a dataframe, reindex can modify rows, columns, or both. By default, rows are reindexed according to a given sequence, in the same way as for series.
The reindex function has several arguments. For example, the argument fill_value allows for defining a default value for the data associated with nonexistent labels when reindexing. There is also an option to fill missing values when reindexing, which is quite useful when working with time series. The option name is method, and the value ffill (bfill) performs a forward (backward) filling.
You can also reindex columns using the columns keyword.
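A sketch covering these reindexing options:

```python
ser = pd.Series([4.5, 7.2, -5.3], index=['d', 'b', 'a'])
ser.reindex(['a', 'b', 'c', 'd'])                 # 'c' appears with NaN
ser.reindex(['a', 'b', 'c', 'd'], fill_value=0)   # ...or with a default value

colors = pd.Series(['blue', 'purple'], index=[0, 2])
colors.reindex(range(6), method='ffill')          # forward filling

frame = pd.DataFrame(np.arange(9).reshape(3, 3),
                     index=['a', 'c', 'd'],
                     columns=['Paphos', 'Nicosia', 'Limassol'])
frame.reindex(['a', 'b', 'c', 'd'])                        # rows (the default)
frame.reindex(columns=['Nicosia', 'Larnaca', 'Limassol'])  # columns
```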
Notice the difference between reindexing and renaming columns, either creating a new object or editing the dataframe in place.
Deleting entries from series and dataframes
You can use the drop method, which returns a new object with the indicated entries deleted. This can be applied to series and dataframes. The inplace=True option edits the object directly, avoiding the creation of a new one.
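For instance:

```python
obj = pd.Series(np.arange(5.0), index=['a', 'b', 'c', 'd', 'e'])
obj.drop(['b', 'c'])            # a new Series without 'b' and 'c'

frame.drop('a')                 # drop a row (frame from the previous sketch)
frame.drop('Nicosia', axis=1)   # drop a column

obj.drop('e', inplace=True)     # edit obj itself, no new object
```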
Series indexing, selection, and filtering
You can use a syntax similar to the one used by NumPy with pandas series; in particular, you can use either the index labels or integers.
You can apply filters to the series
You can also use slicing with integers or index labels, but notice the difference: the end point of the range is included in the labels case.
You can assign values using these methods
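A sketch of these selection modes:

```python
obj = pd.Series(np.arange(4.0), index=['a', 'b', 'c', 'd'])

obj['b']              # by index label
obj[1]                # by integer (recent pandas versions prefer obj.iloc[1])
obj[obj < 2]          # filtering

obj[1:3]              # integer slicing: end point excluded
obj['b':'c']          # label slicing: end point INCLUDED
obj['b':'c'] = 5.0    # assignment through a slice
```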
Dataframes indexing, selection, and filtering
By default, indexing refers to columns and you can set values
If you use slicing, this works over indexes
You can select data using a Boolean array
You can also assign values using this syntax.
You can also use loc and iloc for row indexing, using axis labels or integers, respectively.
We can select two rows and two columns by label and by integer values as follows
Both ways of selecting elements work with slices
It is important to take into account that if you have an axis index containing integers, data selection will always be label-oriented, to avoid possible ambiguities. In these cases it is preferable to use loc or iloc.
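A sketch covering these forms of selection:

```python
data = pd.DataFrame(np.arange(16).reshape(4, 4),
                    index=['Ohio', 'Colorado', 'Utah', 'New York'],
                    columns=['one', 'two', 'three', 'four'])

data['two']                # indexing refers to columns
data[:2]                   # slicing works over rows
data[data['three'] > 5]    # selection with a Boolean array
data[data < 5] = 0         # assignment with the same syntax

data.loc[['Colorado', 'Utah'], ['two', 'three']]   # rows and columns by label
data.iloc[[1, 2], [1, 2]]                          # the same by integer position
data.loc[:'Utah', 'two']                           # both also accept slices
```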
Arithmetic with series and dataframes
The arithmetic between series with different indexes is performed in such a way that the final index set is the union of the involved sets; missing values are introduced in the elements where the two series do not overlap, and they propagate through arithmetic operations.
An example with two series is
In the case of dataframes a double alignment is performed, on both indexes and columns. If there are neither columns nor rows in common, the resulting dataframe will be made only of NaN's. Using the add method you can pass an argument to fill the non-overlapping elements of the dataframe with a given value, though if both elements are missing the result will still be a NaN.
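For instance:

```python
s1 = pd.Series([7.3, -2.5, 3.4], index=['a', 'c', 'd'])
s2 = pd.Series([-2.1, 3.6, -1.5, 4.0], index=['a', 'c', 'e', 'f'])
s1 + s2                      # union of indexes; 'd', 'e', 'f' become NaN

df1 = pd.DataFrame(np.arange(12.0).reshape(3, 4), columns=list('abcd'))
df2 = pd.DataFrame(np.arange(20.0).reshape(4, 5), columns=list('abcde'))
df1 + df2                    # NaN where rows or columns do not overlap
df1.add(df2, fill_value=0)   # fill non-overlapping elements before adding
```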
The following methods are available for Series and DataFrame arithmetic, and each also has a reversed version:
add (radd): addition
sub (rsub): subtraction
div (rdiv): division
floordiv (rfloordiv): floor division
mul (rmul): multiplication
pow (rpow): exponentiation
You can mix series and dataframes in operations that are performed in a similar way to broadcasting in NumPy. By default the series index is matched against the dataframe's columns, broadcasting down the rows
You can also broadcast on the rows, matching the series index versus the dataframe index. In this case you need to use the method notation.
You can also perform arithmetic operations with dataframes
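A sketch of both broadcasting directions:

```python
frame = pd.DataFrame(np.arange(12.0).reshape(4, 3),
                     columns=list('bde'),
                     index=['Utah', 'Ohio', 'Texas', 'Oregon'])
row = frame.iloc[0]
frame - row                    # match on columns, broadcast down the rows

col = frame['d']
frame.sub(col, axis='index')   # match on the row index instead
```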
Mapping functions
NumPy universal functions (ufuncs) can be used in series and dataframes
The apply method allows for applying a function to each row or column of a dataframe, giving as a result a dataframe or a series with the corresponding indexes, depending on the function output. By default, the function is applied to each column.
You can apply a function that returns multiple values, as a series. You can also apply element-wise functions to a Series with map and to a DataFrame with applymap, as follows.
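A sketch of these mapping tools:

```python
frame = pd.DataFrame(np.random.randn(4, 3), columns=list('bde'))

np.abs(frame)                    # a ufunc, applied element-wise

f = lambda x: x.max() - x.min()
frame.apply(f)                   # per column (the default)
frame.apply(f, axis='columns')   # per row

def minmax(x):
    return pd.Series([x.min(), x.max()], index=['min', 'max'])
frame.apply(minmax)              # a function returning multiple values

fmt = lambda x: f'{x:.2f}'
frame['e'].map(fmt)              # element-wise on a Series
frame.applymap(fmt)              # element-wise on a DataFrame
                                 # (renamed DataFrame.map in pandas >= 2.1)
```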
Sorting
The sort_index method returns a new object, lexicographically sorted by row or column, and can be applied to Series and DataFrames. In the DataFrame case you can also sort by column name.
If instead of sorting by indexes you need to sort by Series or DataFrame values, the method is called sort_values; notice that NaN values are sent to the end of the series. In the case of DataFrames you can use one or several columns as sorting keys. It is then mandatory to include at least one column name as the by= argument.
Related to sorting is the ordering of elements by rank, accomplished with the rank method. At first sight it can be surprising to find non-integer positions, which is explained by the fact that the method breaks ties by assigning the mean rank to the group.
There are other ways of breaking ties: the options min and max assign the minimum or maximum rank to the whole group; first takes into account the order of appearance of the elements; and dense is similar to min, but ranks always increase by one, independently of the number of elements in the group.
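For instance:

```python
obj = pd.Series([7, -5, 7, 4, 2, 0, 4])

obj.sort_index()                   # sort by index labels
obj.sort_values()                  # sort by values; NaNs would go to the end

frame = pd.DataFrame({'b': [4, 7, -3, 2], 'a': [0, 1, 0, 1]})
frame.sort_values(by=['a', 'b'])   # one or several columns as sorting keys

obj.rank()                         # ties get the mean rank of the group
obj.rank(method='first')           # ties broken by order of appearance
obj.rank(method='dense')           # ranks increase by one between groups
```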
Descriptive Statistics with Pandas
Pandas provides a set of useful methods to compute descriptive statistical quantities of your Series or DataFrame.
count: Number of non-NaN values.
describe: Summary statistics for a Series or for each DataFrame column.
min, max: Minimum and maximum values.
argmin, argmax: Index locations (integers) at which the minimum or maximum value is obtained.
idxmin, idxmax: Index labels at which the minimum or maximum value is obtained.
quantile: Sample quantile ranging from 0 to 1.
sum: Sum of values.
mean: Mean of values.
median: Arithmetic median (50% quantile) of values.
mad: Mean absolute deviation from the mean value.
prod: Product of all values.
var: Variance of values.
std: Standard deviation of values.
skew: Skewness of values (third moment).
kurt: Excess kurtosis of values (fourth moment minus 3).
cumsum: Cumulative sum of values.
cummin, cummax: Cumulative minimum or maximum of values, respectively.
cumprod: Cumulative product of values.
diff: First arithmetic difference (useful for time series).
pct_change: Percent changes.
Some examples are provided for the normal distribution
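For instance, with normally distributed random data:

```python
df = pd.DataFrame(np.random.normal(loc=0.0, scale=1.0, size=(1000, 3)),
                  columns=['x', 'y', 'z'])

df.count()         # non-NaN values per column
df.mean()          # should be close to 0
df.std()           # should be close to 1
df.describe()      # count, mean, std, min, quartiles, max
df['x'].cumsum()   # cumulative sum of one column
```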
We can also compute more elaborate statistics. As an example we load COVID-19 data for Spain and analyze the total number of cases in the Andalusian provinces. The first step is to download the data and build the requested dataframe.
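A hedged sketch of this step; the file location and the column names (fecha, provincia_iso, num_casos) are assumptions modeled on one public layout of these data, not necessarily the lesson's exact source:

```python
covid = pd.read_csv('covid19_spain_provinces.csv', parse_dates=['fecha'])

# ISO codes for the eight Andalusian provinces
prov = ['AL', 'CA', 'CO', 'GR', 'H', 'J', 'MA', 'SE']
casos = (covid[covid['provincia_iso'].isin(prov)]
         .pivot(index='fecha', columns='provincia_iso', values='num_casos'))
```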
Now we can compute some statistics. For example the covariance and correlation matrices
We can compute also the percent change for a given period
We can compute the correlation with another Series or DataFrame column using corrwith.
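Continuing the sketch:

```python
casos.cov()                   # covariance matrix
casos.corr()                  # correlation matrix
casos.pct_change(periods=7)   # percent change over a 7-day period
casos.corrwith(casos['AL'])   # correlation of each column with one Series
```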
Other methods of interest related to Series and DataFrame description are unique, which provides an array of unique values, and value_counts, which computes value frequencies.
The isin method performs a vectorized check of membership in a given set and is used to filter a Series or a DataFrame column down to a subset of values.
Related to this method is the Index.get_indexer method, which provides an array of integer positions from an index of distinct values. In this example we get an array for the dates when the number of cases is less than or equal to three.
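A sketch of both methods (illustrative values):

```python
s = pd.Series(['c', 'a', 'b', 'b', 'c'])
s[s.isin(['b', 'c'])]         # keep only the values in the given set

unique_vals = pd.Index(['c', 'b', 'a'])
unique_vals.get_indexer(s)    # array([0, 2, 1, 1, 0])
```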
Using apply one can perform rather complex data manipulation in a concise way. We can, for example, count the distinct values in each column of our example dataframe.
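For example, a one-liner sketch:

```python
casos.apply(lambda col: col.nunique())   # distinct values per column
```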
Loading and saving data in pandas
Pandas offers an impressive set of methods for reading and saving data, making it possible to grapple with many different formats. We start with text-formatted files and consider other formats later.
Text-formatted files
The main functions in Pandas to deal with text-formatted files are
read_csv: Load delimited data from a file, URL, or file-like object. The default delimiter is the comma.
read_table: Load delimited data from a file, URL, or file-like object. The default delimiter is the tab ('\t').
read_fwf: Read data without delimiters in a fixed-width column format.
read_clipboard: Version of read_table that reads data from the clipboard. Useful for converting tables from web pages.
We have already seen an example of read_csv in action, loading data from a URL. The large variety of formats and ways of encoding information into text files can be handled at the cost of having a plethora of options and modifiers to the previous functions. The most common ones are:
path: String indicating filesystem location, URL, or file-like object.
sep or delimiter: Character sequence or regular expression that marks field separation in each row.
header: Row number whose entries are used as column names. Defaults to the first row; should be None if there is no header row.
index_col: Column numbers or names to use as the row index in the result; can be a single name/number or a list of them for a hierarchical index.
names: List of column names for the result; combine with header=None.
skiprows: Number of rows at the beginning of the file to ignore. It can also be given as a list of row numbers (starting from 0) to skip.
na_values: Sequence of values to replace with NA.
comment: Character(s) marking comments, used to split comments off the end of lines.
parse_dates: Attempt to parse data to datetime; False by default. If True, will attempt to parse all columns. Otherwise can specify a list of column numbers or names to parse. If an element of the list is a tuple or list, multiple columns will be combined and parsed to date (e.g., if date/time are split across two columns).
keep_date_col: If joining columns to parse date, keep the joined columns; False by default.
converters: Dict mapping column numbers or names to functions (e.g., {'foo': f} would apply the function f to all values in the 'foo' column).
dayfirst: When parsing potentially ambiguous dates, treat as international format (e.g., 7/6/2012 -> June 7, 2012); False by default.
date_parser: Function to use to parse dates.
nrows: Number of rows to read from the beginning of the file.
iterator: Return a TextParser object for reading the file piecemeal.
chunksize: For iteration, size of file chunks.
skip_footer: Number of lines to ignore at the end of the file.
verbose: Print various parser output information, like the number of missing values placed in non-numeric columns.
encoding: Text encoding for Unicode (e.g., 'utf-8' for UTF-8 encoded text).
squeeze: If the parsed data contains only one column, return a Series.
thousands: Separator for thousands (e.g., ',' or '.').
As an example we can read into a dataframe one of the monthly temperature files used in previous examples
By default the first row elements are used to label columns. We can choose our own labels and also transform the year into the dataframe index
Notice that in this case we are also forced to skip the first row.
In case a file is very long, we may need to read it piecewise or iterate over small portions of the file, using the nrows argument.
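A sketch, assuming a hypothetical file TData/nicosia.dat whose first row holds the column names:

```python
temps = pd.read_csv('TData/nicosia.dat')    # first-row entries label the columns

cols = ['year', 'jan', 'feb', 'mar', 'apr', 'may', 'jun',
        'jul', 'aug', 'sep', 'oct', 'nov', 'dec']
temps = pd.read_csv('TData/nicosia.dat', names=cols,
                    index_col='year', skiprows=1)   # our labels; skip the header row

pd.read_csv('TData/nicosia.dat', nrows=5)   # only the first five rows
```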
The to_csv method allows for writing data as comma-separated files. We can write the dataframe read above into a file. By default the separator used is a comma, but you can define another character as separator with the sep option.
By default, row and column labels are included in the file. This can be disabled using the index=False and header=False options, respectively. Note that missing values appear as empty strings; they can be denoted by a so-called sentinel value using the na_rep option.
You can also save a subset of columns, in an arbitrary order that you define.
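For instance:

```python
temps.to_csv('temps_out.csv')            # comma separated, labels included
temps.to_csv('temps_out.txt', sep='|')   # a different separator
temps.to_csv('temps_out.csv', index=False, header=False, na_rep='NULL')
temps.to_csv('temps_sub.csv', columns=['dec', 'jan'])   # a column subset, in order
```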
Other file types
read_excel: Read tabular data from an Excel XLS or XLSX file. Uses the packages xlrd and openpyxl.
read_hdf: Read HDF5 files written by pandas.
read_html: Read all tables found in the given HTML or XML document.
read_json: Read data from a JSON (JavaScript Object Notation) string representation. Library: json.
read_msgpack: Read pandas data encoded using the MessagePack binary format.
read_pickle: Read an arbitrary object stored in Python pickle format.
read_sas: Read a SAS dataset stored in one of the SAS system's custom storage formats.
read_sql: Read the results of a SQL query (using SQLAlchemy) as a pandas DataFrame.
read_stata: Read a dataset from Stata file format.
read_feather: Read the Feather binary file format.
Dataframe and series manipulation
Hierarchical indexing
You can define series and dataframes with more than one level of indexing, paving the way to treating multidimensional data in a lower-dimensional form. This is an example for a series:
You can access the elements making use of the different index levels
And you can also slice through the indexes, making use of the loc attribute and the slice function, or the NumPy syntax. Note that in this case slices are inclusive.
Hierarchical indexing eases the reshaping of data. Making use of the unstack method we can transform the previous series into a dataframe, filling undefined values with NaNs.
The opposite operation is performed, unsurprisingly, by the stack method, transforming a dataframe into a multi-index series.
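A sketch covering multi-index creation, selection, and reshaping:

```python
data = pd.Series(np.random.randn(9),
                 index=[['a', 'a', 'a', 'b', 'b', 'c', 'c', 'd', 'd'],
                        [1, 2, 3, 1, 3, 1, 2, 2, 3]])

data['b']             # select by the outer level
data['b':'c']         # slices are inclusive
data.loc[['b', 'd']]
data.loc[:, 2]        # select by the inner level

df = data.unstack()   # inner level -> columns; gaps filled with NaN
df.stack()            # back to a multi-index Series
```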
The extension of hierarchical indexes to dataframes implies that both rows and columns can have multiple indexes. Let's build an example dataframe with hierarchical indexes in rows and columns.
You can name the indexes, helping to make your code more understandable
It is now easy to select a group of columns
You can change the order of the levels in rows and columns and also sort the data according to some criterion. If, for example, you want to exchange the level order in both rows and columns, you can use the swaplevel function, taking into account that it returns a new object.
The sort_index method sorts the data according to a single index level.
You can combine the swaplevel and sort_index functions.
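A sketch of these operations:

```python
frame = pd.DataFrame(np.arange(12).reshape(4, 3),
                     index=[['a', 'a', 'b', 'b'], [1, 2, 1, 2]],
                     columns=[['Ohio', 'Ohio', 'Colorado'],
                              ['Green', 'Red', 'Green']])
frame.index.names = ['key1', 'key2']        # name the row levels
frame.columns.names = ['state', 'color']    # name the column levels

frame['Ohio']                               # a group of columns

frame.swaplevel('key1', 'key2')             # a new object with levels exchanged
frame.sort_index(level=1)                   # sort by a single index level
frame.swaplevel(0, 1).sort_index(level=0)   # both combined
```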
Many statistics functions have a level option that allows for specifying the level at which the computation is intended.
You can easily create new DataFrames with hierarchical indexing using the set_index and reset_index functions. The first one takes one or more columns as the new indexes of a new DataFrame, while the second performs the inverse operation. Depending on the drop argument value, the columns used as indexes are removed from the DataFrame or kept.
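For instance:

```python
frame.groupby(level='key2').sum()    # a statistic computed at a given level
# (older pandas versions also accepted frame.sum(level='key2'))

df = pd.DataFrame({'a': range(7), 'b': range(7, 0, -1),
                   'c': ['one'] * 3 + ['two'] * 4,
                   'd': [0, 1, 2, 0, 1, 2, 3]})
df2 = df.set_index(['c', 'd'])         # columns become a hierarchical index
df.set_index(['c', 'd'], drop=False)   # keep the columns as well
df2.reset_index()                      # the inverse operation
```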
Data Handling
Different pandas objects can be combined using the functions merge and concat.
The merge function connects rows in DataFrames based on one or more keys, in a way familiar to those used to working with relational databases.
By default an inner merge is performed, and the intersection of the key sets is used: the common elements of the indicated columns are selected, together with the accompanying elements from the other columns. The default behavior can be changed using the left, right, or outer options.
You can also use keys from various columns
When column names coincide in the joined dataframes, pandas by default adds an _x or _y suffix. This can be changed using the suffixes option to merge.
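A sketch of these joins (illustrative keys):

```python
df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c'], 'data1': range(4)})
df2 = pd.DataFrame({'key': ['a', 'b', 'd'], 'data2': range(3)})

pd.merge(df1, df2, on='key')                # inner join (the default)
pd.merge(df1, df2, on='key', how='outer')   # or 'left' / 'right'

left = pd.DataFrame({'k1': ['foo', 'foo', 'bar'], 'k2': ['one', 'two', 'one'],
                     'val': [1, 2, 3]})
right = pd.DataFrame({'k1': ['foo', 'foo', 'bar'], 'k2': ['one', 'one', 'one'],
                      'val': [4, 5, 6]})
pd.merge(left, right, on=['k1', 'k2'],      # several key columns
         suffixes=('_left', '_right'))      # instead of the default _x / _y
```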
The concat function concatenates or stacks together objects along a given axis, in a similar way to the NumPy concatenate function. The labeled axes of pandas result in different options when binding two Series or DataFrames: you can stack either the intersection or the union of the indexes, and you can decide to discard the labels on the axes being bound or to keep the concatenated data structures identifiable after merging.
If we use as arguments Series having no common index, the concat function simply stacks them and creates a new Series.
By default concat acts on axis=0, creating a new Series; the option axis=1 turns the output into a DataFrame with each Series as a column, filling the void values with NaN (an outer join).
If there are common indexes and the operation is performed along axis=0, there will be repeated index values, while in the axis=1 case the number of undefined NaN values will be reduced.
In the axis=1 case, you can perform an inner join using the argument join='inner'.
To keep track of the initial arguments you can use a hierarchical index in the concatenation axis, with the keys argument.
If axis=1, the keys will be used as column names.
You can also concatenate DataFrames. Notice the difference between specifying axis=1 and using the default. If you specify a keys argument, it is used to define a hierarchical index along the given axis.
You can also specify the key values as the keys of a dictionary that replaces the list argument.
In case the information in the index is not relevant, you can use the ignore_index=True option.
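A sketch covering these variants:

```python
s1 = pd.Series([0, 1], index=['a', 'b'])
s2 = pd.Series([2, 3, 4], index=['c', 'd', 'e'])

pd.concat([s1, s2])                             # stacked along axis=0
pd.concat([s1, s2], axis=1)                     # a DataFrame; voids filled with NaN
pd.concat([s1, s1 * 2], axis=1, join='inner')   # keep only common indexes
pd.concat([s1, s2], keys=['one', 'two'])        # hierarchical index marks the origin

df1 = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['one', 'two'])
df2 = pd.DataFrame(np.arange(4).reshape(2, 2), columns=['three', 'four'])
pd.concat([df1, df2], axis=1, keys=['level1', 'level2'])
pd.concat({'level1': df1, 'level2': df2}, axis=1)   # dict keys play the same role
pd.concat([df1, df2], ignore_index=True)            # discard the index information
```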
Graphics in Pandas
Apart from the use of Matplotlib methods, pandas has its own built-in methods to visualize data saved in Series and DataFrames. On top of this, we may use the Seaborn library, which modifies default Matplotlib settings and allows for a fast and convenient way to make complex plots to infer possible statistical relations among data sets. To install Seaborn you only need to run, in the environment where you intend to use the library:
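For instance, with conda (pip install seaborn would also work):

```
conda install seaborn
```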
An interesting piece of advice: once a plot gets complex enough, sketch it by hand (or your idea of it) before you begin coding. That way you can judge how appropriate the plot is, and clearly identify the objects and relationships that you want it to convey.
Very basic plots can be depicted using the built-in plot method.
The default behavior is to create a line for each column and label it with the column name. The plot attribute has a series of methods to create different plot types; the default one is line.
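A minimal sketch with random data:

```python
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.randn(100, 3).cumsum(axis=0),
                  columns=['A', 'B', 'C'])
df.plot()   # one line per column, labeled with the column name
plt.show()
```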
You can customize the plot with some or all of the following options.
subplots: Plot each DataFrame column in a different subplot.
sharex: If subplots=True, share the same x axis, linking ticks and limits.
sharey: If subplots=True, share the same y axis.
figsize: Size of the figure to create, as a (width, height) tuple.
title: Plot title as a string.
legend: Add a subplot legend (True by default).
sort_columns: Plot columns in alphabetical order; by default the existing column order is used.
We can modify the previous plot using these options
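For instance:

```python
df.plot(subplots=True, sharex=True, figsize=(8, 6),
        title='Random walks', legend=True)
plt.show()
```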
You can also trivially plot histograms. For example
Also, one of the built-in pandas options is the representation of Kernel Density Estimate (KDE) plots, a kind of density plot that provides an estimate of the probability distribution behind the data. Instead of kde one can use the modifier density; both are equivalent.
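For example:

```python
data = pd.Series(np.random.normal(size=500))
data.plot.hist(bins=30)   # histogram
data.plot.kde()           # kernel density estimate; .density() is equivalent
plt.show()
```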
You can also plot data using the Seaborn library. The first step is to select a particular Seaborn style, if you want to change the default one. Possible options for the style argument are: darkgrid, whitegrid, dark, white, and ticks.
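For instance:

```python
import seaborn as sns

sns.set_style('whitegrid')   # one of: darkgrid, whitegrid, dark, white, ticks
```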
We first build some figures using the datasets provided with Seaborn, which are pandas dataframes.
This is a pandas dataframe with data about the bills and tips left by different parties in a restaurant. The Seaborn command relplot allows for easy and direct creation of complex relational plots, like the following:
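A sketch of such a call, loading the tips dataset shipped with Seaborn:

```python
tips = sns.load_dataset('tips')

sns.relplot(data=tips, x='total_bill', y='tip',
            col='time',               # one subplot per value of 'time'
            hue='sex', style='sex',   # color and marker from 'sex'
            size='size')              # marker size from the party size
plt.show()
```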
In this case the total_bill and tip columns are plotted as abscissa and ordinate, in different subplot columns (Lunch and Dinner) defined by the time column values; color and marker are fixed according to the sex column values; and the marker size is determined by the party size in the size column.
A similarly concise syntax is used in the Seaborn lmplot function, which includes in the scatterplot the results and uncertainty of a linear regression fit to the provided data.
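For instance:

```python
sns.lmplot(data=tips, x='total_bill', y='tip')   # scatter plus linear fit band
plt.show()
```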
Another interesting function is displot, which combines histograms and kernel density estimates to approximate the probability distribution of the variable under study.
You can also depict in a single step the empirical cumulative distribution function of the data, using the kind="ecdf" argument.
In the case of categorical data, appropriate representations can be achieved with the catplot command.
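A sketch of the three commands:

```python
sns.displot(data=tips, x='total_bill', kde=True)              # histogram + KDE
sns.displot(data=tips, x='total_bill', kind='ecdf')           # empirical CDF
sns.catplot(data=tips, x='day', y='total_bill', kind='box')   # categorical data
plt.show()
```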
Basic Statistical Modeling
We will use the statsmodels library for the statistical treatment of datasets. You can get plenty of information about this library on the Statsmodels homepage.
The first step is loading the necessary libraries; the statsmodels package must have been previously installed in the environment, e.g. using conda.
Then we use the tips DataFrame from the previous section to perform some fits. We will work with four of its columns, and we create a new DataFrame with these data.
The next step is to create the design matrices for the statistical analysis: the endogenous or dependent matrix, Y, and the exogenous or independent matrix, X. We are going to explore first the linear relationship between tip amounts and total bill amounts.
This creates two DataFrames
And we can now describe the model and perform a fit
Using the predict method, new predicted values can be computed from the fit results.
In this case a constant term (the intercept) is assumed in the linear relationship. To fix it to zero and perform a single-parameter fit, the syntax is:
We can also include categorical data in a convenient way in the fit
And you can predict data in the categorical case as follows:
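A sketch of the whole workflow with patsy and statsmodels; the formulas and column choices are illustrative assumptions:

```python
import statsmodels.api as sm
from patsy import dmatrices

# Design matrices: endogenous Y and exogenous X (as DataFrames)
Y, X = dmatrices('tip ~ total_bill', data=tips, return_type='dataframe')

res = sm.OLS(Y, X).fit()   # describe the model and fit it
print(res.summary())
res.predict(X[:5])         # predicted values from the fit results

# Without the intercept (single-parameter fit)
Y0, X0 = dmatrices('tip ~ total_bill - 1', data=tips, return_type='dataframe')
res0 = sm.OLS(Y0, X0).fit()

# Including categorical data, and predicting in that case
Yc, Xc = dmatrices('tip ~ total_bill + C(sex)', data=tips, return_type='dataframe')
resc = sm.OLS(Yc, Xc).fit()
resc.predict(Xc[:5])
```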
The most relevant information in the summary consists of the following items:
Fit parameters: Can be accessed using res.params.
R-squared: The coefficient of determination. A statistical measure of how well the regression line approximates the real data points; a perfect fit gives an R-squared equal to one.
Adj. R-squared: The correlation coefficient adjusted based on the number of observations and the residual degrees of freedom.
P > |t|: P-value for the null hypothesis that the coefficient is equal to zero. If it is less than the confidence level, often 0.05, it indicates a statistically significant relationship between the term and the response.
[95.0% Conf. Interval]: The lower and upper values of the 95% confidence interval.
Working with time series
Working with time series is a complex subject and mastering it requires time and dedication. Time series are ubiquitous and can be found in Chemistry, Physics, Ecology, Economics and Finance, Medicine, and Social Sciences. Pandas provides tools to work with fixed frequency data as well as with irregular time series.
Native Python provides ways to deal with time data through the modules datetime, time, and calendar, and the datetime data type. For example:
Time is stored down to the microsecond. The difference between two times is represented as a timedelta object.
The timedelta function allows for shifting a given time by some amount.
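For example:

```python
from datetime import datetime, timedelta

now = datetime.now()
now.year, now.month, now.day

delta = datetime(2011, 1, 7) - datetime(2008, 6, 24, 8, 15)
delta.days, delta.seconds              # a timedelta object

datetime(2011, 1, 7) + timedelta(12)   # shift a time by twelve days
```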
You can format datetime objects using str or, for a given format, strftime.
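For instance:

```python
stamp = datetime(2011, 1, 3)
str(stamp)                   # '2011-01-03 00:00:00'
stamp.strftime('%Y-%m-%d')   # '2011-01-03'
```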
The possible format specification codes for strftime are:
%Y: Four-digit year
%y: Two-digit year
%m: Two-digit month
%d: Two-digit day
%H: Hour (24-hour clock)
%I: Hour (12-hour clock)
%M: Two-digit minute
%S: Second [00, 61] (seconds 60, 61 account for leap seconds)
%w: Weekday as integer [0 (Sunday) to 6 (Saturday)]
%U: Week number of the year [00, 53]; Sunday is considered the first day of the week, and days before the first Sunday of the year are “week 0”
%W: Week number of the year [00, 53]; Monday is considered the first day of the week, and days before the first Monday of the year are “week 0”
%z: UTC time zone offset as +HHMM or -HHMM; empty if time zone naive
%F: Shortcut for %Y-%m-%d (e.g., 2012-04-18)
%D: Shortcut for %m/%d/%y (e.g., 04/18/12)
The same formats are used to convert strings to dates using the strptime function.
To avoid the explicit format definition, the parse function from dateutil.parser, able to translate many different date string formats, can be used.
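For example:

```python
from dateutil.parser import parse

datetime.strptime('2011-01-03', '%Y-%m-%d')   # explicit format
parse('2011-01-03')                           # format inferred
parse('Jan 31, 1997 10:45 PM')
parse('6/12/2011', dayfirst=True)             # international day-first format
```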
pandas usually works with arrays of time data, used either as an axis index or as columns. It provides the to_datetime method to parse many different date representations.
It can handle gaps or missing elements in the time series, introducing NaT (Not a Time) values.
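For instance:

```python
datestrs = ['2011-07-06 12:00:00', '2011-08-06 00:00:00']
idx = pd.to_datetime(datestrs + [None])
idx             # DatetimeIndex([..., NaT])
pd.isnull(idx)  # array([False, False,  True])
```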
We will only give some hints on how to work with time series, using the previous example of COVID-19 data for Spain. We first read the data from the public server and create a new column named "time", transforming the date into a timestamp with to_datetime.
We now select the data for the total number of cases in the eight Andalusian provinces, using the time data as index. We start with the province of Almería (AL).
A useful feature is the possibility of changing the time frequency, downsampling or upsampling the data. In this case we will downsample to weekly and monthly frequencies using the resample method. This method first groups the data according to a given criterion and then calls an aggregation function.
In this case you are applying the same aggregation function to each column, but you can instead specify different functions for different columns using the agg method.
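A sketch, assuming the casos DataFrame from the earlier COVID-19 sketch (datetime index, one column per province):

```python
almeria = casos['AL']                    # daily cases for Almería

weekly = almeria.resample('W').sum()     # weekly totals
monthly = almeria.resample('M').mean()   # monthly averages

# Different aggregation functions for different columns
casos.resample('M').agg({'AL': 'sum', 'SE': 'mean'})
```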
We can now depict the data for different frequencies and using different formats
Exercises
Exercise 1: Read the different files with Cyprus town temperatures provided in the TData folder and build a DataFrame combining all the information. The columns should be the year and the months, and you can distinguish between data for the different towns by adding an extra column with the town name. Hint: the concat function can be very useful in this case.
Exercise 2: A different way to combine the Cyprus town temperature data provided in the TData folder is to build a DataFrame with a hierarchical index, made of the year and month, and with as many columns as towns, labeled with the town names. Hint: the concat and unstack functions can be helpful in this case.
Exercise 3: Compute the correlation matrix between the temperatures in the provided Cyprus cities using the DataFrame from the previous exercise.
Exercise 4: We provide as an example data set the file meteodat.csv with an excerpt of data with a 10-minute frequency from an automated meteorological station, spanning two months (Jan and Feb 2014). Data are comma separated and you can read them using pd.read_csv. The first column is the date and the second the time. Transform these data into a DataFrame with a datetime index and compute DataFrames with downsampling to hourly and daily values: average values for temperature (Tout), pressure (Pressure), relative humidity (H out), wind speed (Wind Speed), and dew point temperature (Dew point); sum of rainfall (Rain); and maximum and minimum values of temperature (Tmax and Tmin). Show the correlation between these variables at hourly and daily scales.