# Collect Data From Yahoo Finance
> Please pay **ATTENTION** that the data is collected from Yahoo Finance and might not be perfect. We recommend that users prepare their own data if they have a high-quality dataset. For more information, refer to the related document.

> **NOTE**: Yahoo! Finance has blocked access from China. Please change your network if you want to use the Yahoo data crawler.
## Examples of abnormal data

We have considered **stock price adjustment**, but some price series still look very abnormal.
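One quick way to spot such series is a day-over-day sanity check on the downloaded csv. The sketch below is illustrative only (the file name is hypothetical and the 10x threshold is an arbitrary assumption); it uses the `date` and `adjclose` columns the collector writes:

```python
import pandas as pd

# Hypothetical file name; any csv produced by the collector works the same way.
df = pd.read_csv("SH600000.csv", parse_dates=["date"]).sort_values("date")

# Day-over-day ratio of the adjusted close; jumps beyond 10x in either
# direction are usually bad data points rather than real market moves.
ratio = df["adjclose"] / df["adjclose"].shift(1)
print(df.loc[(ratio > 10) | (ratio < 0.1), ["date", "adjclose"]])
```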
## Requirements

```bash
pip install -r requirements.txt
```
## Collector Data

### Get Qlib data (`bin` file)

`qlib-data` from YahooFinance is data that has already been dumped and can be used directly in `qlib`. This ready-made qlib-data is not updated regularly. If users want the latest data, please follow the steps below to download it.
- get data: `python scripts/get_data.py qlib_data`
- parameters:
  - `target_dir`: save dir, by default `~/.qlib/qlib_data/cn_data`
  - `version`: dataset version, value from [`v1`, `v2`], by default `v1`
    - `v2` end date is *2021-06*; `v1` end date is *2020-09*
    - due to YahooFinance's unstable access to historical data, there are some differences between `v2` and `v1`
  - `interval`: `1d` or `1min`, by default `1d`
  - `region`: `cn` or `us` or `in`, by default `cn`
  - `delete_old`: delete existing data from `target_dir` (*features, calendars, instruments, dataset_cache, features_cache*), value from [`True`, `False`], by default `True`
  - `exists_skip`: skip `get_data` if `target_dir` data already exists, value from [`True`, `False`], by default `False`
- examples (a programmatic Python sketch follows these commands):

```bash
# cn 1d
python scripts/get_data.py qlib_data --target_dir ~/.qlib/qlib_data/cn_data --region cn
# cn 1min
python scripts/get_data.py qlib_data --target_dir ~/.qlib/qlib_data/cn_data_1min --region cn --interval 1min
# us 1d
python scripts/get_data.py qlib_data --target_dir ~/.qlib/qlib_data/us_data --region us --interval 1d
```
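The same download can be triggered from Python. This is a sketch assuming the `GetData` helper from `qlib.tests.data` (in current qlib versions, `scripts/get_data.py` is a thin `fire` wrapper around it); the keyword arguments mirror the CLI flags above:

```python
from qlib.tests.data import GetData

# Programmatic equivalent of `python scripts/get_data.py qlib_data ...`
GetData().qlib_data(
    target_dir="~/.qlib/qlib_data/cn_data",  # same as --target_dir
    region="cn",                             # same as --region
    interval="1d",                           # same as --interval
    exists_skip=True,                        # same as --exists_skip
)
```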
### Collector YahooFinance data to qlib

Collect YahooFinance data and dump it into `qlib` format. If the ready-made data above can't meet users' requirements, users can follow this section to crawl the latest data and convert it to qlib-data.
1. download data to csv: `python scripts/data_collector/yahoo/collector.py download_data`

   This will download the raw data, such as the high, low, open, close, and adjclose prices, from Yahoo to a local directory, with one file per symbol.

   - parameters:
     - `source_dir`: save directory
     - `interval`: `1d` or `1min`, by default `1d`
       - due to the limitations of the YahooFinance API, only the last month's data is available for `1min`
     - `region`: `CN` or `US` or `IN` or `BR`, by default `CN`
     - `delay`: `time.sleep(delay)`, by default 0.5
     - `start`: start datetime, by default `"2000-01-01"`; closed interval (including `start`)
     - `end`: end datetime, by default `pd.Timestamp(datetime.datetime.now() + pd.Timedelta(days=1))`; open interval (excluding `end`)
     - `max_workers`: number of concurrent symbols; changing this parameter is not recommended, in order to maintain the integrity of the symbol data; by default 1
     - `check_data_length`: check the number of rows per symbol, by default `None`
       - if `len(symbol_df) < check_data_length`, the symbol will be re-fetched; the number of re-fetches comes from the `max_collector_count` parameter (see the retry sketch after the example commands)
     - `max_collector_count`: number of retries for "failed" symbols, by default 2
   - examples:

```bash
# cn 1d data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/cn_data --start 2020-01-01 --end 2020-12-31 --delay 1 --interval 1d --region CN
# cn 1min data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/cn_data_1min --delay 1 --interval 1min --region CN
# us 1d data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/us_data --start 2020-01-01 --end 2020-12-31 --delay 1 --interval 1d --region US
# us 1min data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/us_data_1min --delay 1 --interval 1min --region US
# in 1d data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/in_data --start 2020-01-01 --end 2020-12-31 --delay 1 --interval 1d --region IN
# in 1min data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/in_data_1min --delay 1 --interval 1min --region IN
# br 1d data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/br_data --start 2003-01-03 --end 2022-03-01 --delay 1 --interval 1d --region BR
# br 1min data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/br_data_1min --delay 1 --interval 1min --region BR
```
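The interplay between `check_data_length`, `max_collector_count`, and `delay` can be summarized with a small standalone sketch. `fetch_symbol` here is a hypothetical callable standing in for the actual Yahoo request logic in `collector.py`:

```python
import time
import pandas as pd

def collect_with_retry(symbol, fetch_symbol, delay=0.5,
                       check_data_length=None, max_collector_count=2):
    """Illustrative retry loop: if a fetched symbol has fewer rows than
    `check_data_length`, it is fetched again, up to `max_collector_count`
    attempts in total."""
    df = pd.DataFrame()
    for _ in range(max_collector_count):
        time.sleep(delay)  # throttle requests, same role as --delay
        df = fetch_symbol(symbol)
        if check_data_length is None or len(df) >= check_data_length:
            break  # enough rows -- accept this result
    return df
```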
2. normalize data: `python scripts/data_collector/yahoo/collector.py normalize_data`

   This will normalize the downloaded raw csv data (including price adjustment) so that it can be dumped into `qlib` format.

   - parameters:
     - `source_dir`: csv directory
     - `normalize_dir`: result directory
     - `max_workers`: number of concurrent workers, by default 1
     - `interval`: `1d` or `1min`, by default `1d`
       - if `interval == 1min`, `qlib_data_1d_dir` cannot be `None`
     - `region`: `CN` or `US` or `IN`, by default `CN`
     - `date_field_name`: column name identifying time in csv files, by default `date`
     - `symbol_field_name`: column name identifying symbol in csv files, by default `symbol`
     - `end_date`: if not `None`, the last date saved (including `end_date`); if `None`, this parameter is ignored; by default `None`
     - `qlib_data_1d_dir`: qlib directory (1d data)
       - if `interval == 1min`, `qlib_data_1d_dir` cannot be `None`: normalizing 1min data requires 1d data (a conceptual sketch follows the example commands)
       - `qlib_data_1d` can be obtained like this:
         ```bash
         python scripts/get_data.py qlib_data --target_dir <qlib_data_1d_dir> --interval 1d
         python scripts/data_collector/yahoo/collector.py update_data_to_bin --qlib_data_1d_dir <qlib_data_1d_dir> --end_date <end_date>
         ```
       - or download 1d data from YahooFinance following the steps above
   - examples:

```bash
# normalize 1d cn
python collector.py normalize_data --source_dir ~/.qlib/stock_data/source/cn_data --normalize_dir ~/.qlib/stock_data/source/cn_1d_nor --region CN --interval 1d
# normalize 1min cn
python collector.py normalize_data --qlib_data_1d_dir ~/.qlib/qlib_data/cn_data --source_dir ~/.qlib/stock_data/source/cn_data_1min --normalize_dir ~/.qlib/stock_data/source/cn_1min_nor --region CN --interval 1min
# normalize 1d br
python scripts/data_collector/yahoo/collector.py normalize_data --source_dir ~/.qlib/stock_data/source/br_data --normalize_dir ~/.qlib/stock_data/source/br_1d_nor --region BR --interval 1d
# normalize 1min br
python collector.py normalize_data --qlib_data_1d_dir ~/.qlib/qlib_data/br_data --source_dir ~/.qlib/stock_data/source/br_data_1min --normalize_dir ~/.qlib/stock_data/source/br_1min_nor --region BR --interval 1min
```
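Why 1min normalization needs `qlib_data_1d_dir`: the already-normalized daily data serves as the reference scale for the raw 1min prices. The sketch below is a conceptual illustration only, under assumed column names (`date`, `open`/`high`/`low`/`close`); the authoritative logic lives in `collector.py`:

```python
import pandas as pd

def calibrate_1min_with_1d(df_1min: pd.DataFrame, df_1d: pd.DataFrame) -> pd.DataFrame:
    """Conceptual sketch: derive a per-day factor from the normalized daily
    close and rescale the raw 1min prices onto the same scale."""
    day = df_1min["date"].dt.date                       # calendar day of each 1min bar
    last_1min_close = df_1min.groupby(day)["close"].last()
    daily_close = df_1d.assign(day=df_1d["date"].dt.date).set_index("day")["close"]
    factor = daily_close / last_1min_close              # per-day scaling factor
    for col in ["open", "high", "low", "close"]:
        df_1min[col] = df_1min[col] * day.map(factor)
    return df_1min
```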
3. dump data: `python scripts/dump_bin.py dump_all`

   This will convert the normalized csv in the `feature` directory into numpy arrays, storing the data with one file per field and one directory per symbol (a sketch of the resulting bin layout follows the example commands).

   - parameters:
     - `data_path`: stock data path or directory, i.e. the normalize result (`normalize_dir`)
     - `qlib_dir`: qlib (dump) data directory
     - `freq`: transaction frequency, by default `day`
       - `freq_map = {"1d": "day", "1min": "1min"}`
     - `max_workers`: number of threads, by default 16
     - `include_fields`: fields to dump, by default `""`
     - `exclude_fields`: fields not to dump, by default `""`
       - `dump_fields = include_fields if include_fields else set(symbol_df.columns) - set(exclude_fields) if exclude_fields else symbol_df.columns`
     - `symbol_field_name`: column name identifying symbol in csv files, by default `symbol`
     - `date_field_name`: column name identifying time in csv files, by default `date`
     - `file_suffix`: stock data file format, by default `.csv`
   - examples:

```bash
# dump 1d cn
python dump_bin.py dump_all --data_path ~/.qlib/stock_data/source/cn_1d_nor --qlib_dir ~/.qlib/qlib_data/cn_data --freq day --exclude_fields date,symbol --file_suffix .csv
# dump 1min cn
python dump_bin.py dump_all --data_path ~/.qlib/stock_data/source/cn_1min_nor --qlib_dir ~/.qlib/qlib_data/cn_data_1min --freq 1min --exclude_fields date,symbol --file_suffix .csv
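As a sketch of the layout `dump_all` produces (hedged; see `dump_bin.py` for the authoritative writer): each `<qlib_dir>/features/<symbol>/<field>.day.bin` file is a flat little-endian float32 array whose first element is the start index into `<qlib_dir>/calendars/day.txt`, followed by one value per trading day:

```python
import os
import numpy as np

# Read one dumped field back directly; the path assumes the cn 1d dump above.
path = os.path.expanduser("~/.qlib/qlib_data/cn_data/features/sh600000/close.day.bin")
data = np.fromfile(path, dtype="<f4")
start_index, values = int(data[0]), data[1:]  # header, then daily values
print(f"starts at calendar row {start_index}, {len(values)} trading days")
```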
### Automatic update of daily frequency data (from Yahoo Finance)

> It is recommended that users update the data manually once (`--trading_date 2021-05-25`) and then set it to update automatically.

> **NOTE**: Users can't incrementally update data based on the offline data provided by Qlib (some fields are removed to reduce the data size). Users should use the yahoo collector to download Yahoo data from scratch and then incrementally update it.

- Automatic update of data to the `qlib` directory each trading day (Linux)
  - use crontab: `crontab -e`
  - set up timed tasks:

    ```
    * * * * 1-5 python <script path> update_data_to_bin --qlib_data_1d_dir <user data dir>
    ```
- Manual update of data

  ```
  python scripts/data_collector/yahoo/collector.py update_data_to_bin --qlib_data_1d_dir <user data dir> --end_date <end date>
  ```

  - `end_date`: end of trading day (not included)
  - `check_data_length`: check the number of rows per symbol, by default `None`
    - if `len(symbol_df) < check_data_length`, the symbol will be re-fetched; the number of re-fetches comes from the `max_collector_count` parameter
- `scripts/data_collector/yahoo/collector.py update_data_to_bin` parameters:
  - `source_dir`: directory where the raw data collected from the Internet is saved, by default `Path(__file__).parent/source`
  - `normalize_dir`: directory for normalized data, by default `Path(__file__).parent/normalize`
  - `qlib_data_1d_dir`: the qlib data to be updated for yahoo, usually obtained by downloading qlib data (see "Get Qlib data" above)
  - `end_date`: end datetime, by default `pd.Timestamp(trading_date + pd.Timedelta(days=1))`; open interval (excluding `end_date`)
  - `region`: region, value from ["CN", "US"], by default "CN"
  - `interval`: interval, by default `1d` (currently only 1d data is supported)
  - `exists_skip`: skip if data already exists, by default `False`

## Using data

```python
import qlib
from qlib.data import D
# 1d data cn
# freq=day, freq default day
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data", region="cn")
df = D.features(D.instruments("all"), ["$close"], freq="day")
# 1min data cn
# freq=1min
qlib.init(provider_uri="~/.qlib/qlib_data/cn_data_1min", region="cn")
inst = D.list_instruments(D.instruments("all"), freq="1min", as_list=True)
# get 100 symbols
df = D.features(inst[:100], ["$close"], freq="1min")
# get all symbol data
# df = D.features(D.instruments("all"), ["$close"], freq="1min")
# 1d data us
qlib.init(provider_uri="~/.qlib/qlib_data/us_data", region="us")
df = D.features(D.instruments("all"), ["$close"], freq="day")
# 1min data us
qlib.init(provider_uri="~/.qlib/qlib_data/us_data_1min", region="us")
inst = D.list_instruments(D.instruments("all"), freq="1min", as_list=True)
# get 100 symbols
df = D.features(inst[:100], ["$close"], freq="1min")
# get all symbol data
# df = D.features(D.instruments("all"), ["$close"], freq="1min")
```