# Microsoft SharePoint

This page contains the setup guide and reference information for the Microsoft SharePoint source connector.
The Microsoft Graph API uses OAuth for authentication. Microsoft Graph exposes granular permissions that control the access that apps have to resources such as users, groups, and mail. When a user signs in to your app, they (or, in some cases, an administrator) are given a chance to consent to these permissions. If the user consents, your app is given access to the resources and APIs that it has requested. For apps that don't take a signed-in user, permissions can be pre-consented to by an administrator when the app is installed.
Microsoft Graph has two types of permissions:

- **Delegated permissions** are used by apps that have a signed-in user present.
- **Application permissions** are used by apps that run without a signed-in user present, such as background services or daemons.
This source requires **Application permissions**. Follow these instructions for creating an app in the Azure portal. This process will produce the `client_id`, `client_secret`, and `tenant_id` needed for the connector configuration.
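With those three values in hand, an access token is obtained from Microsoft Entra ID via the OAuth 2.0 client credentials flow. As a rough illustration of what that request looks like (the `build_token_request` helper is hypothetical, not part of the connector; the token endpoint and `.default` scope are the standard Microsoft identity platform ones):

```python
# Sketch of the OAuth 2.0 client-credentials token request made against
# Microsoft Entra ID. Values below are placeholders, not real credentials.
from urllib.parse import urlencode

TOKEN_URL = "https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return the (url, form_body) pair for a client-credentials token request."""
    url = TOKEN_URL.format(tenant_id=tenant_id)
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" requests all Application permissions granted to the app.
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

url, body = build_token_request("my-tenant-id", "my-client-id", "my-secret")
```

POSTing that body as `application/x-www-form-urlencoded` returns a JSON payload whose `access_token` field is then sent as a `Bearer` token on Microsoft Graph API calls.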
### Path Patterns

(tl;dr: path pattern syntax uses wcmatch.glob, with the GLOBSTAR and SPLIT flags enabled.)
This connector can sync multiple files by using glob-style patterns, rather than requiring a specific path for every file. This enables:
You must provide a path pattern. You can also provide many patterns split with `|` for more complex directory layouts. A pattern of `**` would indicate every file in the folder.
Each path pattern is a reference from the root of the folder, so don't include the root folder name itself in the pattern(s).
Some example patterns:

- `**` : match everything.
- `**/*.csv` : match all files with a specific extension.
- `myFolder/**/*.csv` : match all csv files anywhere under `myFolder`.
- `*/**` : match everything at least one folder deep.
- `*/*/*/**` : match everything at least three folders deep.
- `**/file.*|**/file` : match every file called "file" with any extension (or no extension).
- `x/*/y/*` : match all files that sit in sub-folder x -> any folder -> folder y.
- `**/prefix*.csv` : match all csv files with a specific prefix.
- `**/prefix*.parquet` : match all parquet files with a specific prefix.

Let's look at a specific example, matching the following folder layout (`MyFolder` is the folder specified in the connector config as the root folder, which the patterns are relative to):
```text
MyFolder
  -> log_files
  -> some_table_files
     -> part1.csv
     -> part2.csv
  -> images
  -> more_table_files
     -> part3.csv
  -> extras
     -> misc
        -> another_part1.csv
```
We want to pick up part1.csv, part2.csv and part3.csv (excluding another_part1.csv for now). We could do this a few different ways:
- The single pattern `**/part*.csv` would pick up every csv file whose name begins with "part".
- The dual pattern `some_table_files/*.csv|more_table_files/*.csv` would pick up relevant files only from those exact folders.
- The single pattern `*table_files/*.csv` achieves the same result. This could however cause problems in the future if new unexpected folders started being created.
- Recursive wildcards also work: `extras/**/*.csv` would pick up any csv files nested in folders below "extras", such as "extras/misc/another_part1.csv".

As you can probably tell, there are many ways to achieve the same goal with path patterns. We recommend using a pattern that ensures clarity and is robust against future additions to the directory structure.
### User Schema

When using the Avro, Jsonl, CSV, or Parquet format, you can provide a schema to use for the output stream. Note that this doesn't apply to the experimental Document file type format.
Providing a schema allows for more control over the output of this stream. Without a provided schema, columns and datatypes will be inferred from the first created file matching your path pattern and suffix. This will probably be fine in most cases, but there may be situations where you want to enforce a schema instead, e.g.:
- You only care about a specific known subset of the columns. The other columns would all still be included, but packed into the `_ab_additional_properties` map.
- Or any other reason!

The schema must be provided as valid JSON, as a map of `{"column": "datatype"}` where each datatype is one of:

- string
- number
- integer
- object
- array
- boolean
- null
For example:
`{"id": "integer", "location": "string", "longitude": "number", "latitude": "number"}`

`{"username": "string", "friends": "array", "information": "object"}`

Since CSV files are effectively plain text, providing specific reader options is often required for correct parsing of the files. These settings are applied when a CSV is created or exported, so please ensure that this process happens consistently over time.
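Returning to the user schema option above: a provided schema is just a JSON map, so it can be sanity-checked before being pasted into the connector configuration. A minimal sketch (the `validate_user_schema` helper is hypothetical, not part of the connector):

```python
import json

# JSON Schema primitive types accepted in a {"column": "datatype"} map.
ALLOWED_TYPES = {"string", "number", "integer", "object", "array", "boolean", "null"}

def validate_user_schema(raw: str) -> dict:
    """Parse a user schema string and reject unknown datatypes."""
    schema = json.loads(raw)
    if not isinstance(schema, dict):
        raise ValueError("schema must be a JSON object of column -> datatype")
    for column, datatype in schema.items():
        if datatype not in ALLOWED_TYPES:
            raise ValueError(f"column {column!r} has unknown datatype {datatype!r}")
    return schema

schema = validate_user_schema('{"id": "integer", "location": "string"}')
```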
- **Header Definition**: *User Provided* assumes the CSV does not have a header row and uses the headers you provide; *Autogenerated* assumes the CSV does not have a header row and the CDK will generate headers of the form `f{i}`, where `i` is the column index starting from 0. Otherwise, the default behavior is to use the header row from the CSV file. If you want to autogenerate or provide column names for a CSV that does have a header row, set a value for the "Skip rows before header" option to ignore the header row.
- **Delimiter**: the character delimiting individual cells in the CSV data, e.g. `\t` for tab-separated files. By default, this value is set to `,`.
- **Encoding**: the character encoding of the files, by default `utf8`.
- **Escape Character**: an escape character such as a backslash (`\`). For example, given the following data:

```text
Product,Description,Price
Jeans,"Navy Blue, Bootcut, 34\"",49.99
```
The backslash (\) is used directly before the second double quote (") to indicate that it is not the closing quote for the field, but rather a literal double quote character that should be included in the value (in this example, denoting the size of the jeans in inches: 34" ).
Leaving this field blank (default option) will disallow escaping.
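The effect of the escape character can be reproduced with Python's standard csv module (this is an illustration of the escaping rule above, not the connector's actual parser):

```python
import csv
import io

# The example row above, with a backslash escaping the inner double quote.
data = 'Product,Description,Price\nJeans,"Navy Blue, Bootcut, 34\\"",49.99\n'

reader = csv.reader(io.StringIO(data), escapechar="\\", doublequote=False)
header = next(reader)
row = next(reader)

# The escaped quote survives as a literal character in the parsed value.
assert row == ['Jeans', 'Navy Blue, Bootcut, 34"', '49.99']
```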
Apache Parquet is a column-oriented data storage format of the Apache Hadoop ecosystem. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. At the moment, partitioned parquet datasets are unsupported. The following settings are available:
The Avro parser uses the Fastavro library. The following settings are available:
There are currently no options for JSONL parsing.
:::warning
The Document file type format is currently an experimental feature and not subject to SLAs. Use at your own risk.
:::
The Document file type format is a special format that allows you to extract text from Markdown, TXT, PDF, Word, PowerPoint, and Google documents. If selected, the connector will extract text from the documents and output it as a single field named `content`. The `document_key` field will hold a unique identifier for the processed file, which can be used as a primary key. The content of the document will contain markdown formatting converted from the original file format. Each file matching the defined glob pattern needs to be a markdown (md), PDF (pdf), or Docx (docx) file.
One record will be emitted for each document. Keep in mind that large files can emit large records that might not fit into every destination as each destination has different limitations for string fields.
Before parsing each document, the connector exports Google Document files to Docx format internally. Google Sheets, Google Slides, and drawings are internally exported and parsed by the connector as PDFs.
:::info
The raw file replication feature has the following requirements and limitations:

- **Supported Airbyte versions:** `v1.2.0` or later.
- **Max file size:** `1GB` per file.
- **Supported destinations:** S3 (`v1.4.0` or later).
:::
Copy raw files without parsing their contents. Bits are copied into the destination exactly as they appeared in the source. Recommended for use with unstructured text data, non-text and compressed files.
Format options will not be taken into account. Instead, files will be transferred to the file-based destination without parsing underlying data.
If enabled, sends the subdirectory folder structure along with source file names to the destination. Otherwise, files will be synced by their names only. This option is ignored when file-based replication is not enabled.
By providing a URL in the Site URL field, the connector will be able to access the files in the specified SharePoint site.
The site URL should be in the format `https://<tenant_name>.sharepoint.com/sites/<site>`. If the field is left empty, the connector will access the files in the main site.
To have the connector iterate over all sub-sites, provide the site URL as `https://<tenant_name>.sharepoint.com/sites/`.
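The three Site URL cases described above can be told apart mechanically. A sketch (the `classify_site_url` helper and its return labels are hypothetical, purely for illustration):

```python
from urllib.parse import urlparse

def classify_site_url(site_url: str) -> str:
    """Classify a Site URL field value into the three documented cases."""
    if not site_url:
        return "main site"
    path = urlparse(site_url).path
    if path.rstrip("/") == "/sites" and path.endswith("/"):
        # e.g. https://tenant.sharepoint.com/sites/ -> iterate all sub-sites
        return "all sub-sites"
    # e.g. https://tenant.sharepoint.com/sites/Marketing -> one specific site
    return "specific site"
```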
The Microsoft SharePoint source connector supports the following sync modes:
| Feature | Supported?(Yes/No) |
|---|---|
| Full Refresh Sync | Yes |
| Incremental Sync | Yes |
There are no predefined streams. Streams are based on the content of the files added on the Set up page.
The connector is subject to the standard Microsoft Graph request throttling limits.
| Integration Type | Airbyte Type |
|---|---|
| string | string |
| number | number |
| array | array |
| object | object |
| Version | Date | Pull Request | Subject |
|---|---|---|---|
| 0.10.4 | 2025-08-21 | 65118 | Decertify connector |
| 0.10.3 | 2025-07-13 | 60562 | Update dependencies |
| 0.10.2 | 2025-05-10 | 59113 | Update dependencies |
| 0.10.1 | 2025-05-07 | 59711 | Fix edge case for unexpected URIs |
| 0.10.0 | 2025-05-07 | 59700 | Promoting release candidate 0.10.0-rc.1 to a main version. |
| 0.10.0-rc.1 | 2025-05-05 | 57507 | Adapt file-transfer records to latest protocol, requires platform >= 1.7.0, destination-s3 >= 1.8.0 |
| 0.9.3 | 2025-04-19 | 58471 | Update dependencies |
| 0.9.2 | 2025-04-12 | 57920 | Update dependencies |
| 0.9.1 | 2025-04-05 | 57065 | Update dependencies |
| 0.9.0 | 2025-04-01 | 55912 | Provide ability to iterate all sharepoint sites |
| 0.8.2 | 2025-03-29 | 56712 | Update dependencies |
| 0.8.1 | 2025-03-22 | 56014 | Update dependencies |
| 0.8.0 | 2025-03-12 | 54658 | Provide ability to sync other sites than Main sharepoint site |
| 0.7.2 | 2025-03-08 | 55427 | Update dependencies |
| 0.7.1 | 2025-03-01 | 54749 | Update dependencies |
| 0.7.0 | 2025-02-27 | 54200 | Add advanced Oauth |
| 0.6.1 | 2025-02-22 | 45062 | Update dependencies |
| 0.6.0 | 2025-02-20 | 54140 | Implement file transfer mode to move raw files |
| 0.5.2 | 2024-08-24 | 45646 | Fix: handle wrong folder name |
| 0.5.1 | 2024-08-24 | 44660 | Update dependencies |
| 0.5.0 | 2024-08-19 | 42983 | Migrate to CDK v4.5.1 |
| 0.4.5 | 2024-08-19 | 44382 | Update dependencies |
| 0.4.4 | 2024-08-12 | 43743 | Update dependencies |
| 0.4.3 | 2024-08-10 | 43565 | Update dependencies |
| 0.4.2 | 2024-08-03 | 43235 | Update dependencies |
| 0.4.1 | 2024-07-27 | 42704 | Update dependencies |
| 0.4.0 | 2024-07-25 | 42008 | Migrate to CDK v3.5.3 |
| 0.3.1 | 2024-07-20 | 42143 | Update dependencies |
| 0.3.0 | 2024-07-16 | 42007 | Migrate to CDK v2.4.0 |
| 0.2.11 | 2024-07-13 | 41688 | Update dependencies |
| 0.2.10 | 2024-07-10 | 41589 | Update dependencies |
| 0.2.9 | 2024-07-06 | 40917 | Update dependencies |
| 0.2.8 | 2024-06-26 | 40539 | Update dependencies |
| 0.2.7 | 2024-06-25 | 40357 | Update dependencies |
| 0.2.6 | 2024-06-24 | 40233 | Update dependencies |
| 0.2.5 | 2024-06-22 | 39987 | Update dependencies |
| 0.2.4 | 2024-05-29 | 38675 | Avoid error on empty stream when running discover |
| 0.2.3 | 2024-04-17 | 37372 | Make refresh token optional |
| 0.2.2 | 2024-03-28 | 36573 | Update QL to 400 |
| 0.2.1 | 2024-03-22 | 36381 | Unpin CDK |
| 0.2.0 | 2024-03-06 | 35830 | Add fetching shared items |
| 0.1.0 | 2024-01-25 | 33537 | New source |