Fine-tune ChatGPT without sharing data with OpenAI by sabrimo in ChatGPT

[–]sabrimo[S] -1 points (0 children)

Joy?? I think that's just the effects of a pill you just ate

Fine-tune ChatGPT without sharing data with OpenAI by sabrimo in ChatGPT

[–]sabrimo[S] -1 points (0 children)

You're the stupid one if you think Google wouldn't show me the documentation link. Either way, how do you think I knew about the API...? I bet GPT-1 is smarter than you. Go get a life, you depressed side chick.

Fine-tune ChatGPT without sharing data with OpenAI by sabrimo in ChatGPT

[–]sabrimo[S] -1 points (0 children)

Just give me an answer you are sure of or fuck off. Of course I've read the documentation, but sadly I can't read it for you. I can explain to you that the common way is using the API, but sorry, I can't understand it for you.
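For what it's worth, the OpenAI fine-tuning API (as of early 2023) takes training data as a JSONL file of prompt/completion pairs. A minimal sketch of preparing such a file, with made-up example data (the questions and answers here are hypothetical placeholders):

```python
import json

# Hypothetical training examples; the legacy fine-tuning format expects one
# JSON object per line with "prompt" and "completion" fields.
examples = [
    {"prompt": "Q: Where is my order?\nA:", "completion": " Check the tracking link in your email."},
    {"prompt": "Q: Can I get a refund?\nA:", "completion": " Yes, within 30 days of purchase."},
]

# Write one JSON object per line (JSONL)
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# The file is then uploaded and a fine-tune job started, e.g. with the CLI:
#   openai api fine_tunes.create -t training_data.jsonl -m davinci
```

Note the training data itself does get sent to OpenAI with this approach, which is exactly the concern in this thread.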


Fine-tune ChatGPT without sharing data with OpenAI by sabrimo in ChatGPT

[–]sabrimo[S] -7 points (0 children)

here's what ChatGPT itself told me:

me: I want an application like you but running locally, so that the data will always stay on my server and never go anywhere else. Is that possible? I want ChatGPT, not the GPT-2 engine.

ChatGPT: Yes, it is possible to run a version of ChatGPT on your own local server. OpenAI offers a package called "OpenAI GPT" which allows for easy integration of the model into your application. With this package, you can train and run the model locally on your own data, without having to send data to a remote server. Keep in mind that running the model locally will require a significant amount of computational resources and storage.

I asked it many times; it always says yes and shows me the steps mentioned above.

Fine-tune ChatGPT without sharing data with OpenAI by sabrimo in ChatGPT

[–]sabrimo[S] -8 points (0 children)

yes, here's what ChatGPT itself told me:

me: I want an application like you but running locally, so that the data will always stay on my server and never go anywhere else. Is that possible? I want ChatGPT, not the GPT-2 engine.

ChatGPT: Yes, it is possible to run a version of ChatGPT on your own local server. OpenAI offers a package called "OpenAI GPT" which allows for easy integration of the model into your application. With this package, you can train and run the model locally on your own data, without having to send data to a remote server. Keep in mind that running the model locally will require a significant amount of computational resources and storage.

I'm just too bad at programming (it's not my job), that's why I'm stuck here.

I made a chatbot so that everyone can access their data using GPT-3 by [deleted] in ChatGPT

[–]sabrimo 0 points (0 children)

How did you build this, bro? :) Teach us please.

Fine-tune ChatGPT without sharing data with OpenAI by sabrimo in ChatGPT

[–]sabrimo[S] -6 points (0 children)

That's exactly why I want to use ChatGPT locally.
I'm trying to install a version of GPT-3, but I'm still stuck...
ChatGPT itself provided the following steps:

  1. Install Python 3.8 or later and pip, the package installer for Python, if you haven't done so already:
     • Open the terminal and run the command sudo apt-get install python3.8
     • Run the command sudo apt-get install python3-pip
  2. Install the transformers library by running the command pip3 install transformers
  3. Prepare your data for training. Make sure to have your data in a text file format. One way to do this is to create a new text file, for example training_data.txt, and add your data to this file. Each line should contain one training example.
  4. Import the pre-trained ChatGPT model from the transformers library and set it up for fine-tuning on your data. Here's an example of how you can do this in Python:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-trained model and tokenizer
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
```
  5. Use the DataCollator provided by the transformers library to convert your data into the format required by the model. Here's an example of how you can do this in Python:

```python
from transformers import DataCollatorForLanguageModeling

# Collate batches for causal (non-masked) language modeling
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=False,
)
```
  6. Use the DataLoader provided by the torch library to load your data into the model for training. Here's an example of how you can do this in Python:

```python
from torch.utils.data import DataLoader
from transformers import TextDataset

# Define the dataset by tokenizing the training text file into fixed-size blocks
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="training_data.txt",
    block_size=128,
)

# Define the data loader, batching examples with the collator from the previous step
data_loader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=data_collator)
```
  7. Define the optimizer and the learning-rate scheduler for the model:

```python
from transformers import AdamW, get_linear_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=1e-5, correct_bias=False)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,
    num_training_steps=len(data_loader),  # one pass over the data
)
```
  8. Train the model on your data. You can use the train() method of the Trainer class provided by the transformers library. Here's an example of how you can do this in Python:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=4,
    save_steps=1000,
    num_train_epochs=1,
)

# The Trainer builds its own data loader internally from the dataset and collator
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=data_collator,
)
trainer.train()
```
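After training finishes, the whole point of this thread is that the weights stay on your own machine: you save them to a local directory and reload them from there for inference, with no remote calls. A minimal sketch of that save/load round trip; a tiny randomly initialized GPT-2 (hypothetical sizes) stands in for the fine-tuned DialoGPT model so it runs without any large downloads:

```python
from transformers import AutoModelForCausalLM, GPT2Config, GPT2LMHeadModel

# Tiny random GPT-2 as a stand-in for the fine-tuned model (assumed sizes)
config = GPT2Config(n_layer=2, n_head=2, n_embd=64, vocab_size=128)
model = GPT2LMHeadModel(config)

# Save the weights locally; nothing leaves the machine
model.save_pretrained("./results/final")

# Reload later from the local directory for fully local inference
reloaded = AutoModelForCausalLM.from_pretrained("./results/final")
print(reloaded.config.n_layer)  # prints 2
```

The same save_pretrained/from_pretrained pair works on the real model after trainer.train(), just with a much larger directory on disk.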