{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Combining Datasets: concat and append" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Some of the most interesting studies of data come from combining different data sources.\n", "These operations can involve anything from very straightforward concatenation of two different datasets to more complicated database-style joins and merges that correctly handle any overlaps between the datasets.\n", "`Series` and ``DataFrame``s are built with this type of operation in mind, and Pandas includes functions and methods that make this sort of data wrangling fast and straightforward.\n", "\n", "Here we'll take a look at simple concatenation of `Series` and ``DataFrame``s with the `pd.concat` function; later we'll dive into more sophisticated in-memory merges and joins implemented in Pandas.\n", "\n", "We begin with the standard imports:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "tags": [] }, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For convenience, we'll define this function, which creates a `DataFrame` of a particular form that will be useful in the following examples:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " A B C\n", "0 A0 B0 C0\n", "1 A1 B1 C1\n", "2 A2 B2 C2" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "def make_df(cols, ind):\n", " \"\"\"Quickly make a DataFrame\"\"\"\n", " data = {c: [str(c) + str(i) for i in ind]\n", " for c in cols}\n", " return pd.DataFrame(data, ind)\n", "\n", "# example DataFrame\n", "make_df('ABC', range(3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In addition, we'll create a quick class that allows us to display multiple ``DataFrame``s side by side. The code makes use of the special `_repr_html_` method, which IPython/Jupyter uses to implement its rich object display:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [] }, "outputs": [], "source": [ "class display(object):\n", " \"\"\"Display HTML representation of multiple objects\"\"\"\n", " template = \"\"\"
<div style=\"float: left; padding: 10px;\">\n", " <p style='font-family:\"Courier New\", Courier, monospace'>{0}</p>{1}\n", "
\"\"\"\n", " def __init__(self, *args):\n", " self.args = args\n", " \n", " def _repr_html_(self):\n", " return '\\n'.join(self.template.format(a, eval(a)._repr_html_())\n", " for a in self.args)\n", " \n", " def __repr__(self):\n", " return '\\n\\n'.join(a + '\\n' + repr(eval(a))\n", " for a in self.args)\n", " " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The use of this will become clearer as we continue our discussion in the following section." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Recall: Concatenation of NumPy Arrays\n", "\n", "Concatenation of `Series` and `DataFrame` objects behaves similarly to concatenation of NumPy arrays, which can be done via the `np.concatenate` function, as discussed in [The Basics of NumPy Arrays](02.02-The-Basics-Of-NumPy-Arrays.ipynb).\n", "Recall that with it, you can combine the contents of two or more arrays into a single array:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "array([1, 2, 3, 4, 5, 6, 7, 8, 9])" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = [1, 2, 3]\n", "y = [4, 5, 6]\n", "z = [7, 8, 9]\n", "np.concatenate([x, y, z])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The first argument is a list or tuple of arrays to concatenate.\n", "Additionally, in the case of multidimensional arrays, it takes an `axis` keyword that allows you to specify the axis along which the result will be concatenated:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "array([[1, 2, 1, 2],\n", " [3, 4, 3, 4]])" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = [[1, 2],\n", " [3, 4]]\n", "np.concatenate([x, x], axis=1)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Simple Concatenation with pd.concat" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The `pd.concat` function provides a similar syntax to `np.concatenate` but contains a number of options that we'll discuss momentarily:\n", "\n", "```python\n", "# Signature in Pandas v1.3.5\n", "pd.concat(objs, axis=0, join='outer', ignore_index=False, keys=None,\n", " levels=None, names=None, verify_integrity=False,\n", " sort=False, copy=True)\n", "```\n", "\n", "`pd.concat` can be used for a simple concatenation of `Series` or `DataFrame` objects, just as `np.concatenate` can be used for simple concatenations of arrays:" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/plain": [ "1 A\n", "2 B\n", "3 C\n", "4 D\n", "5 E\n", "6 F\n", "dtype: object" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "ser1 = pd.Series(['A', 'B', 'C'], index=[1, 2, 3])\n", "ser2 = pd.Series(['D', 'E', 'F'], index=[4, 5, 6])\n", "pd.concat([ser1, ser2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It also works to concatenate higher-dimensional objects, such as ``DataFrame``s:" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "df1\n", " A B\n", "1 A1 B1\n", "2 A2 B2\n", "\n", "df2\n", " A B\n", "3 A3 B3\n", "4 A4 B4\n", "\n", "pd.concat([df1, df2])\n", " A B\n", "1 A1 B1\n", "2 A2 B2\n", "3 A3 B3\n", "4 A4 B4" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df1 = make_df('AB', [1, 2])\n", "df2 = make_df('AB', [3, 4])\n", "display('df1', 'df2', 'pd.concat([df1, df2])')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Its default behavior is to concatenate row-wise within the `DataFrame` (i.e., `axis=0`).\n", "Like `np.concatenate`, `pd.concat` allows specification of an axis along which concatenation will take place.\n", "Consider the following example:" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "df3\n", " A B\n", "0 A0 B0\n", "1 A1 B1\n", "\n", "df4\n", " C D\n", "0 C0 D0\n", "1 C1 D1\n", "\n", "pd.concat([df3, df4], axis='columns')\n", " A B C D\n", "0 A0 B0 C0 D0\n", "1 A1 B1 C1 D1" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df3 = make_df('AB', [0, 1])\n", "df4 = make_df('CD', [0, 1])\n", "display('df3', 'df4', \"pd.concat([df3, df4], axis='columns')\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We could have equivalently specified ``axis=1``; here we've used the more intuitive ``axis='columns'``. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Duplicate Indices\n", "\n", "One important difference between `np.concatenate` and `pd.concat` is that Pandas concatenation *preserves indices*, even if the result will have duplicate indices!\n", "Consider this short example:" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "x\n", " A B\n", "0 A0 B0\n", "1 A1 B1\n", "\n", "y\n", " A B\n", "0 A2 B2\n", "1 A3 B3\n", "\n", "pd.concat([x, y])\n", " A B\n", "0 A0 B0\n", "1 A1 B1\n", "0 A2 B2\n", "1 A3 B3" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x = make_df('AB', [0, 1])\n", "y = make_df('AB', [2, 3])\n", "y.index = x.index # make indices match\n", "display('x', 'y', 'pd.concat([x, y])')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice the repeated indices in the result.\n", "While this is valid within ``DataFrame``s, the outcome is often undesirable.\n", "`pd.concat` gives us a few ways to handle it." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Treating repeated indices as an error\n", "\n", "If you'd like to simply verify that the indices in the result of `pd.concat` do not overlap, you can include the `verify_integrity` flag.\n", "With this set to `True`, the concatenation will raise an exception if there are duplicate indices.\n", "Here is an example, where for clarity we'll catch and print the error message:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ValueError: Indexes have overlapping values: Int64Index([0, 1], dtype='int64')\n" ] } ], "source": [ "try:\n", " pd.concat([x, y], verify_integrity=True)\n", "except ValueError as e:\n", " print(\"ValueError:\", e)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Ignoring the index\n", "\n", "Sometimes the index itself does not matter, and you would prefer it to simply be ignored.\n", "This option can be specified using the `ignore_index` flag.\n", "With this set to `True`, the concatenation will create a new integer index for the resulting `DataFrame`:" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "x\n", " A B\n", "0 A0 B0\n", "1 A1 B1\n", "\n", "y\n", " A B\n", "0 A2 B2\n", "1 A3 B3\n", "\n", "pd.concat([x, y], ignore_index=True)\n", " A B\n", "0 A0 B0\n", "1 A1 B1\n", "2 A2 B2\n", "3 A3 B3" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "display('x', 'y', 'pd.concat([x, y], ignore_index=True)')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Adding MultiIndex keys\n", "\n", "Another option is to use the `keys` option to specify a label for the data sources; the result will be a hierarchically indexed `DataFrame` containing the data:" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "x\n", " A B\n", "0 A0 B0\n", "1 A1 B1\n", "\n", "y\n", " A B\n", "0 A2 B2\n", "1 A3 B3\n", "\n", "pd.concat([x, y], keys=['x', 'y'])\n", " A B\n", "x 0 A0 B0\n", " 1 A1 B1\n", "y 0 A2 B2\n", " 1 A3 B3" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "display('x', 'y', \"pd.concat([x, y], keys=['x', 'y'])\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can use the tools discussed in [Hierarchical Indexing](03.05-Hierarchical-Indexing.ipynb) to transform this multiply indexed `DataFrame` into the representation we're interested in." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Concatenation with Joins\n", "\n", "In the short examples we just looked at, we were mainly concatenating ``DataFrame``s with shared column names.\n", "In practice, data from different sources might have different sets of column names, and `pd.concat` offers several options in this case.\n", "Consider the concatenation of the following two ``DataFrame``s, which have some (but not all!) columns in common:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "df5\n", " A B C\n", "1 A1 B1 C1\n", "2 A2 B2 C2\n", "\n", "df6\n", " B C D\n", "3 B3 C3 D3\n", "4 B4 C4 D4\n", "\n", "pd.concat([df5, df6])\n", " A B C D\n", "1 A1 B1 C1 NaN\n", "2 A2 B2 C2 NaN\n", "3 NaN B3 C3 D3\n", "4 NaN B4 C4 D4" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df5 = make_df('ABC', [1, 2])\n", "df6 = make_df('BCD', [3, 4])\n", "display('df5', 'df6', 'pd.concat([df5, df6])')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The default behavior is to fill entries for which no data is available with NA values.\n", "To change this, we can adjust the `join` parameter of the `concat` function.\n", "By default, the join is a union of the input columns (`join='outer'`), but we can change this to an intersection of the columns using `join='inner'`:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "df5\n", " A B C\n", "1 A1 B1 C1\n", "2 A2 B2 C2\n", "\n", "df6\n", " B C D\n", "3 B3 C3 D3\n", "4 B4 C4 D4\n", "\n", "pd.concat([df5, df6], join='inner')\n", " B C\n", "1 B1 C1\n", "2 B2 C2\n", "3 B3 C3\n", "4 B4 C4" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "display('df5', 'df6',\n", " \"pd.concat([df5, df6], join='inner')\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Another useful pattern is to use the `reindex` method before concatenation for finer control over which columns are dropped:" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " A B C\n", "1 A1 B1 C1\n", "2 A2 B2 C2\n", "3 NaN B3 C3\n", "4 NaN B4 C4" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.concat([df5, df6.reindex(df5.columns, axis=1)])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The append Method\n", "\n", "Because direct array concatenation is so common, `Series` and `DataFrame` objects have an `append` method that can accomplish the same thing in fewer keystrokes.\n", "For example, in place of `pd.concat([df1, df2])`, you can use `df1.append(df2)`:" ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false, "jupyter": { "outputs_hidden": false } }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ "df1\n", " A B\n", "1 A1 B1\n", "2 A2 B2\n", "\n", "df2\n", " A B\n", "3 A3 B3\n", "4 A4 B4\n", "\n", "df1.append(df2)\n", " A B\n", "1 A1 B1\n", "2 A2 B2\n", "3 A3 B3\n", "4 A4 B4" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "display('df1', 'df2', 'df1.append(df2)')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Keep in mind that unlike the `append` and `extend` methods of Python lists, the `append` method in Pandas does not modify the original object; instead it creates a new object with the combined data.\n", "It also is not a very efficient method, because it involves creation of a new index *and* data buffer.\n", "Thus, if you plan to do multiple `append` operations, it is generally better to build a list of `DataFrame` objects and pass them all at once to the `concat` function.\n", "\n", "In the next chapter, we'll look at a more powerful approach to combining data from multiple sources: the database-style merges/joins implemented in `pd.merge`.\n", "For more information on `concat`, `append`, and related functionality, see the [\"Merge, Join, Concatenate and Compare\" section](http://pandas.pydata.org/pandas-docs/stable/merging.html) of the Pandas documentation." ] } ], "metadata": { "anaconda-cloud": {}, "jupytext": { "formats": "ipynb,md" }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.2" } }, "nbformat": 4, "nbformat_minor": 4 }