Note: This README was translated by GPT (implemented by this project's plugin) and may not be 100% reliable. Please carefully check the translation results.
2023.11.7: When installing dependencies, please select the specified versions in the `requirements.txt` file. Installation command: `pip install -r requirements.txt`.
If you like this project, please give it a Star.
To translate this project into an arbitrary language with GPT, read and run `multi_language.py` (experimental).
Note

1. Only plugins (buttons) highlighted in **bold** support reading files, and some plugins are located in the dropdown menu in the plugin area. Additionally, we welcome new plugin PRs and will process them with the highest priority.
2. The functionality of each file in this project is described in detail in the self-analysis report `self_analysis.md`. As versions iterate, you can also click the relevant function plugin at any time to call GPT to regenerate the project's self-analysis report. Common questions are collected in the wiki. Regular installation method | One-click installation script | Configuration instructions.
3. This project is compatible with, and encourages the use of, domestic large language models such as ChatGLM. Multiple api-keys can be used together: fill in the configuration file with `API_KEY="openai-key1,openai-key2,azure-key3,api2d-key4"`. To temporarily switch `API_KEY`, enter the temporary `API_KEY` in the input area and press Enter to apply it.
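A comma-separated key string like the one above can be split into individual keys roughly as follows (a minimal sketch; the project's actual key handling, including per-provider routing and selection among keys, lives in its own helpers and differs in detail):

```python
def split_api_keys(api_key_setting: str) -> list[str]:
    # Split the comma-separated API_KEY string into individual keys,
    # dropping surrounding whitespace and empty entries.
    return [k.strip() for k in api_key_setting.split(",") if k.strip()]

keys = split_api_keys("openai-key1,openai-key2,azure-key3,api2d-key4")
print(keys)  # four separate keys
```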
| Feature (⭐ = Recently Added) | Description |
|---|---|
| ⭐Integrate New Models | Baidu Qianfan and Wenxin Yiyan, Tongyi Qianwen, Shanghai AI-Lab Shusheng, Xunfei Xinghuo, LLaMa2, Zhipu API, DALLE3 |
| Proofreading, Translation, Code Explanation | One-click proofreading, translation, searching for grammar errors in papers, explaining code |
| Custom Shortcuts | Support for custom shortcuts |
| Modular Design | Support for powerful plugins, plugins support hot updates |
| Program Profiling | [Plugin] One-click to profile Python/C/C++/Java/Lua/... project trees or self-profiling |
| Read Papers, Translate Papers | [Plugin] One-click to interpret full-text latex/pdf papers and generate abstracts |
| Full-text Latex Translation, Proofreading | [Plugin] One-click translation or proofreading of latex papers |
| Batch Comment Generation | [Plugin] One-click batch generation of function comments |
| Markdown Translation | [Plugin] Did you see the README in the top five languages? |
| Chat Analysis Report Generation | [Plugin] Automatically generates summary reports after running |
| PDF Paper Full-text Translation | [Plugin] Extract title & abstract of PDF papers + translate full-text (multi-threaded) |
| Arxiv Helper | [Plugin] Enter the arxiv article URL to translate the abstract + download PDF with one click |
| One-click Proofreading of Latex Papers | [Plugin] Syntax and spelling correction of Latex papers similar to Grammarly + output side-by-side PDF |
| Google Scholar Integration Helper | [Plugin] Given any Google Scholar search page URL, let GPT help you write related works |
| Internet Information Aggregation + GPT | [Plugin] One-click to let GPT retrieve information from the Internet to answer questions and keep the information up to date |
| ⭐Arxiv Paper Fine Translation (Docker) | [Plugin] One-click high-quality translation of arxiv papers, the best paper translation tool at present |
| ⭐Real-time Speech Input | [Plugin] Asynchronously listen to audio, automatically segment sentences, and automatically find the best time to answer |
| Formula/Image/Table Display | Can simultaneously display formulas in TeX form and rendered form, support formula and code highlighting |
| ⭐AutoGen Multi-Agent Plugin | [Plugin] Explore the emergence of multi-agent intelligence with Microsoft AutoGen! |
| Start Dark Theme | Add /?__theme=dark to the end of the browser URL to switch to the dark theme |
| More LLM Model Support | It must be great to be served by GPT3.5, GPT4, THU ChatGLM2, and Fudan MOSS at the same time, right? |
| ⭐ChatGLM2 Fine-tuning Model | Support for loading ChatGLM2 fine-tuning models and providing ChatGLM2 fine-tuning assistant plugins |
| More LLM Model Access, support for huggingface deployment | Add the NewBing interface (New Bing) and introduce Tsinghua JittorLLMs to support LLaMA and Pangu |
| ⭐void-terminal pip package | Use all of this project's function plugins directly in Python without the GUI (under development) |
| ⭐Void Terminal Plugin | [Plugin] Schedule other plugins of this project directly in natural language |
| More New Feature Demonstrations (Image Generation, etc.)...... | See the end of this document ........ |
New interface (modify `config.py` to switch between "left-right layout" and "top-bottom layout")

All buttons are dynamically generated by reading `functional.py`, and custom functions can be freely added to free up the clipboard.

Download the project:

```sh
git clone --depth=1 https://github.com/binary-husky/gpt_academic.git
cd gpt_academic
```
In `config.py`, configure the API KEY and other settings; click here for special network environment configuration methods (Wiki page).
「 The program will first check if a secret configuration file named config_private.py exists and use the configurations from that file to override the ones in config.py with the same names. If you understand this logic, we strongly recommend that you create a new configuration file named config_private.py next to config.py and move (copy) the configurations from config.py to config_private.py (only copy the configuration items you have modified). 」
「 Project configuration can be done via environment variables. The format of the environment variables can be found in the docker-compose.yml file or our Wiki page. Configuration priority: environment variables > config_private.py > config.py. 」
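The priority order above can be sketched as a tiny resolver (a hypothetical simplification; the project's real configuration loader also handles type conversion, defaults, and caching):

```python
import os

def resolve_config(name, default=None):
    # Documented priority: environment variables > config_private.py > config.py.
    if name in os.environ:
        return os.environ[name]
    for module_name in ("config_private", "config"):
        try:
            module = __import__(module_name)
        except ImportError:
            continue  # the file is optional; fall through to the next source
        if hasattr(module, name):
            return getattr(module, name)
    return default
```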
```sh
# (Option I: If you are familiar with Python, python>=3.9) Note: use the official pip source or the Aliyun pip source.
# Temporary method for switching the source: python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
python -m pip install -r requirements.txt

# (Option II: Using Anaconda) The steps are similar (https://www.bilibili.com/video/BV1rc411W7Dr):
conda create -n gptac_venv python=3.11   # create the anaconda environment
conda activate gptac_venv                # activate the anaconda environment
python -m pip install -r requirements.txt  # same as the pip installation step
```
【Optional Step】If you need to support THU ChatGLM2 or Fudan MOSS as backends, you need to install additional dependencies (prerequisites: familiarity with Python and PyTorch, plus a sufficiently powerful machine):
```sh
# 【Optional Step I】Support THU ChatGLM2. Note: if you encounter the "Call ChatGLM fail unable to load ChatGLM parameters" error, refer to the following:
#   1. The default installation above is the torch+cpu version; to use cuda, uninstall torch and reinstall torch+cuda.
#   2. If the model cannot be loaded due to insufficient local hardware, modify the model precision in request_llm/bridge_chatglm.py:
#      change AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
#      to     AutoTokenizer.from_pretrained("THUDM/chatglm-6b-int4", trust_remote_code=True)
python -m pip install -r request_llms/requirements_chatglm.txt

# 【Optional Step II】Support Fudan MOSS
python -m pip install -r request_llms/requirements_moss.txt
git clone --depth=1 https://github.com/OpenLMLab/MOSS.git request_llms/moss  # make sure you are in the project root directory when executing this line

# 【Optional Step III】Support RWKV Runner
# Refer to the wiki: https://github.com/binary-husky/gpt_academic/wiki/%E9%80%82%E9%85%8DRWKV-Runner

# 【Optional Step IV】Make sure AVAIL_LLM_MODELS in config.py includes the expected models. Currently supported models (the jittorllms series currently only supports the docker solution):
AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "api2d-gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "chatglm", "moss"] # + ["jittorllms_rwkv", "jittorllms_pangualpha", "jittorllms_llama"]
```
Run the program:

```sh
python main.py
```
```sh
# Modify docker-compose.yml: keep scheme 0 and delete the other schemes. Then run:
docker-compose up
```

```sh
# Modify docker-compose.yml: keep scheme 1 and delete the other schemes. Then run:
docker-compose up
```
P.S. If you need the LaTeX plugin functionality, please see the Wiki. Alternatively, you can use scheme 4 or scheme 0 directly to get the LaTeX functionality.
```sh
# Modify docker-compose.yml: keep scheme 2 and delete the other schemes. Then run:
docker-compose up
```
Windows one-click running script. Windows users who are completely unfamiliar with the python environment can download the one-click running script from the Release to install the version without local models. The script is contributed by oobabooga.
Use third-party APIs, Azure, Wenxin, Xinghuo, etc., see Wiki page
Pitfall guide for deploying on cloud servers. Please visit Cloud Server Remote Deployment Wiki
Some new deployment platforms or methods
Running under a secondary URL path (e.g. `http://localhost/subpath`): please visit the FastAPI Run Instructions.

To add custom shortcut buttons, open `core_functional.py` with any text editor, add the following entry, and then restart the program. (If the button already exists, both the prefix and suffix can be modified on the fly without restarting the program.)
For example:
"Super Translation": {
# Prefix: will be added before your input. For example, used to describe your request, such as translation, code explanation, proofreading, etc.
"Prefix": "Please translate the following paragraph into Chinese and then explain each proprietary term in the text using a markdown table:\n\n",
# Suffix: will be added after your input. For example, used to wrap your input in quotation marks along with the prefix.
"Suffix": "",
},
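Conceptually, a shortcut button just wraps your input with the configured prefix and suffix before sending the text to the model (a minimal sketch with a hypothetical helper name; the project's real dispatch code does more):

```python
def apply_shortcut(entry: dict, user_input: str) -> str:
    # Wrap the user's input with the entry's Prefix and Suffix.
    return entry.get("Prefix", "") + user_input + entry.get("Suffix", "")

demo = {"Prefix": "Please proofread the following text:\n\n", "Suffix": ""}
print(apply_shortcut(demo, "An example sentence."))
```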
Write powerful function plugins to perform any task you desire and can't imagine. The difficulty of writing and debugging plugins in this project is very low. As long as you have a certain knowledge of Python, you can implement your own plugin functionality by following the template we provide. For more details, please refer to the Function Plugin Guide.
Click `Save the current conversation` in the function plugin area to save the current conversation as a readable and restorable HTML file. Additionally, click `Load conversation history archive` in the function plugin area (dropdown menu) to restore a previous session.
Tip: Clicking `Load conversation history archive` without specifying a file allows you to view the cached historical HTML archives.

GPT Academic Developer QQ Group: 610599535

You can change the theme by modifying the THEME option in `config.py`.
Chuanhu-Small-and-Beautiful website

- `master` branch: main branch, stable version
- `frontier` branch: development branch, test version

The code references the designs of many other excellent projects, in no particular order:
- https://github.com/oobabooga/one-click-installers
- https://github.com/gradio-app/gradio
- https://github.com/fghrsh/live2d_demo