
keras BackupAndRestore restores the model, but accuracy is lower and loss is higher

Warlam · 2 months ago



I am training on a Google Colab GPU. I need to save the model as often as possible and then resume training in a new Colab session. My code is below, but after resuming, accuracy is lower and loss is higher. For example, at epoch 89 I had:

acc: 0.9990301728      loss:0.002603143221 
acc_val: 0.9557291865  loss_val:0.2962754667 

On the first run in the new Colab session, at epoch 90, I got:

acc: 0.9803879261    loss:0.1103143221 
acc_val: 0.939127624 loss_val:0.1836656481
import numpy as np
from sklearn.model_selection import KFold, train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, BatchNormalization, MaxPool1D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.callbacks import ReduceLROnPlateau, CSVLogger, BackupAndRestore

# Assumed defined elsewhere in the notebook: all_file_paths, epilepsy_file_paths,
# batch_size, epochs, and the data_generator_train/val/test helpers.

# Define 5-fold cross-validation
kf = KFold(n_splits=5, shuffle=True, random_state=42)

conf_matrix_all_CNN_10s = []
fpr_matrix_all_CNN_10s = []
tpr_matrix_all_CNN_10s = []

for i, (train_val_index, test_index) in enumerate(kf.split(all_file_paths)):
    print(f"Fold {i+1}:")

    # Split data into train+val and test sets for this fold
    train_val_files = [all_file_paths[j] for j in train_val_index]
    test_files = [all_file_paths[j] for j in test_index]

    train_files, val_files = train_test_split(train_val_files, test_size=0.25, random_state=42)

    # Create train and test data generators
    train_generator = data_generator_train(train_files, batch_size)
    val_generator = data_generator_val(val_files, batch_size)
    test_generator = data_generator_test(test_files, batch_size)

    # Define model architecture
    model = Sequential()
    input_shape = np.load(epilepsy_file_paths[0]).T.shape

    model.add(Conv1D(64, 3, strides=1, input_shape=input_shape, padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool1D(pool_size=(2)))
    model.add(Dropout(0.5))

    model.add(Conv1D(48, 3, strides=1, padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool1D(pool_size=(2)))
    model.add(Dropout(0.5))

    model.add(Conv1D(32, 3, strides=1, padding='same', activation='relu'))
    model.add(BatchNormalization())
    model.add(MaxPool1D(pool_size=(2)))
    model.add(Dropout(0.5))

    model.add(Flatten())

    model.add(Dense(256, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))

    modelOptimizer = Adam(learning_rate=0.001)
    model.compile(optimizer=modelOptimizer, loss=BinaryCrossentropy(), metrics=['accuracy'])

    reduceLR_callback = ReduceLROnPlateau(
        monitor="val_loss",
        factor=0.5,
        patience=7,
        mode="min",
        min_lr=1e-5,
    )

    checkpoint_dir = '/content/drive/MyDrive/Project1/Weights/10s'

    # Define CSV logger callback
    csv_logger = CSVLogger(f'{checkpoint_dir}/training_log_fold_{i + 1}_CNN_10s.csv', append=True)



    backup_restore_callback = BackupAndRestore(backup_dir=f'{checkpoint_dir}/backup_fold_{i + 1}')

    # Train the model with training data
    history = model.fit(
        train_generator,
        epochs=epochs,
        steps_per_epoch=len(train_files) // batch_size,
        validation_data=val_generator,
        validation_steps=len(val_files) // batch_size,
        callbacks=[reduceLR_callback, csv_logger, backup_restore_callback],
        shuffle=False
    )
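
Since the goal is to save as often as possible, it may also help that newer TF releases (an assumption about the environment; the argument appeared around TF 2.11) let BackupAndRestore checkpoint every N batches instead of only at epoch boundaries. A sketch:

# Sketch, assuming TF >= 2.11: back up every 500 training batches so a
# Colab disconnect loses at most 500 steps instead of a whole epoch.
backup_restore_callback = BackupAndRestore(
    backup_dir=f'{checkpoint_dir}/backup_fold_{i + 1}',
    save_freq=500,
)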

I expected the accuracy and loss after reloading the model to be close to the values before the restart.
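
One way to tell whether the restore itself is lossy (a sketch, not part of the original post; it assumes the same fold's file lists and val_generator are rebuilt identically in the new session) is to snapshot the full model at the end of a session and evaluate it right after reloading, before any further training:

# End of the old session: save weights + optimizer state explicitly
# (assumes TF >= 2.12 for the native .keras format; use .h5 otherwise).
model.save(f'{checkpoint_dir}/model_fold_{i + 1}.keras')

# Start of the new session: reload and evaluate before resuming fit().
# If these numbers match the last logged epoch, the weights survived the
# restart, and the gap comes from training dynamics (e.g. generator state,
# ReduceLROnPlateau counters), not from BackupAndRestore itself.
from tensorflow import keras
restored = keras.models.load_model(f'{checkpoint_dir}/model_fold_{i + 1}.keras')
val_loss, val_acc = restored.evaluate(val_generator, steps=len(val_files) // batch_size)
print(f'restored val_loss={val_loss:.4f} val_acc={val_acc:.4f}')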

Latest replies
  • This is sentiment analysis code for hotel reviews (positive or negative). I use the pandas, transformers, datasets, and turkish_lm_tuner libraries. First, I suspected the path name (C:\Users\Ata Onur Özdemir, i.e. the "Ö" character), so I changed it, but that did not fix it; the output gives the same error. Second, I created an __init__.py in the output folder, but that also gave the same error. I also defined the path in the environment variables, but the same error appeared again. I searched Google and elsewhere and tried many approaches, but I could not find a solution. Please help me :)

    import os
    import pandas as pd
    from transformers import AutoTokenizer
    from datasets import Dataset
    from turkish_lm_tuner import TrainerForClassification, EvaluatorForClassification
    
    # Load the data
    data = pd.read_csv('../Emotion_Detection/Hotel_readablee.csv')
    
    # Define the output directory
    output_dir = 'C:\\Users\\Ata Onur Özdemir\\PycharmProjects\\Emotion_Detection\\output'
    
    # Check if the output directory exists
    if os.path.exists(output_dir):
        # Check the contents of the directory
        print(f"Contents of {output_dir} directory:")
        print(os.listdir(output_dir))
        
        # Rename the directory
        new_output_dir = output_dir + "_old"
        os.rename(output_dir, new_output_dir)
        print(f"{output_dir} directory has been renamed to {new_output_dir}.")
        
        # Create a new output directory
        os.makedirs(output_dir)
        print(f"New {output_dir} directory created.")
    else:
        # If the directory does not exist, create it
        os.makedirs(output_dir)
        print(f"{output_dir} directory created.")
    
    # Initialize the tokenizer
    model_name = "boun-tabi-LMG/TURNA"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    
    # Define the CustomDatasetProcessor class and other necessary steps
    class CustomDatasetProcessor:
        def __init__(self, tokenizer, max_input_length):
            self.tokenizer = tokenizer
            self.max_input_length = max_input_length
    
        def load_and_preprocess_data(self, data):
            dataset = Dataset.from_pandas(data)
    
            def preprocess_function(examples):
                # Convert each review text to string type
                positive_reviews = [str(review) for review in examples['Positive_Review_Tr']]
                negative_reviews = [str(review) for review in examples['Negative_Review_Tr']]
    
                # Use the tokenizer correctly
                tokenized_reviews = self.tokenizer(
                    positive_reviews,
                    negative_reviews,
                    truncation=True,
                    padding='max_length',
                    max_length=self.max_input_length,
                    return_tensors='pt'  # Return PyTorch tensors
                )
    
                return tokenized_reviews
    
            tokenized_dataset = dataset.map(preprocess_function, batched=True)
            return tokenized_dataset
    
    # Initialize the dataset processor
    dataset_processor = CustomDatasetProcessor(tokenizer, max_input_length=2048)
    
    # Split the data into training, validation, and test sets
    train_data = data.sample(frac=0.8, random_state=42)
    remaining_data = data.drop(train_data.index)
    validation_data = remaining_data.sample(frac=0.5, random_state=42)
    test_data = remaining_data.drop(validation_data.index)
    
    # Preprocess the datasets
    train_dataset = dataset_processor.load_and_preprocess_data(train_data)
    eval_dataset = dataset_processor.load_and_preprocess_data(validation_data)
    test_dataset = dataset_processor.load_and_preprocess_data(test_data)
    
    # Training parameters
    training_params = {
        'num_train_epochs': 10,
        'per_device_train_batch_size': 4,
        'per_device_eval_batch_size': 4,
        'output_dir': output_dir,
        'evaluation_strategy': 'epoch',
        'save_strategy': 'epoch',
    }
    
    # Optimizer parameters
    optimizer_params = {
        'optimizer_type': 'adafactor',
        'scheduler': False,
    }
    
    # Test parameters
    test_params = {
        'per_device_eval_batch_size': 4,
        'output_dir': output_dir,
    }
    
    num_labels = 4  # number of output classes (set to 2 for binary classification)
    
    # Initialize TrainerForClassification
    model_trainer = TrainerForClassification(
        model_name=model_name,
        num_labels=num_labels,
        task='classification',
        optimizer_params=optimizer_params,
        training_params=training_params,
        model_save_path="hotel_reviews_classification_model",
        test_params=test_params
    )
    
    # Train and evaluate the model
    trainer, model = model_trainer.train_and_evaluate(train_dataset, eval_dataset, test_dataset)
    
    # Save the trained model and tokenizer
    model.save_pretrained("hotel_reviews_classification_model")
    tokenizer.save_pretrained("hotel_reviews_classification_model")
    
    # Evaluate the model using EvaluatorForClassification
    evaluator = EvaluatorForClassification(
        model_save_path="hotel_reviews_classification_model",
        model_name=model_name,
        task='classification',
        test_params=test_params,
        num_labels=num_labels
    )
    
    # Evaluate the model on the test dataset
    results = evaluator.evaluate_model(test_dataset)
    
    # Convert the results to a DataFrame
    results_df = pd.DataFrame(results)
    
    # Save the results to a new CSV file
    results_df.to_csv('evaluation_results.csv', index=False)
    print("Evaluation results saved to evaluation_results.csv.")
    
    # Check the current working directory
    print("Current Working Directory:", os.getcwd())
    
    
    

    The system gives this error:

    Traceback (most recent call last):
      File "C:\Users\Ata Onur Özdemir\PycharmProjects\Emotion_Detection\main.py", line 101, in <module>
        trainer, model = model_trainer.train_and_evaluate(train_dataset, eval_dataset, test_dataset)
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\turkish_lm_tuner\trainer.py", line 195, in train_and_evaluate
        trainer.train()
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\transformers\trainer.py", line 1885, in train
        return inner_training_loop(
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\transformers\trainer.py", line 2147, in _inner_training_loop
        self.control = self.callback_handler.on_train_begin(args, self.state, self.control)
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\transformers\trainer_callback.py", line 454, in on_train_begin
        return self.call_event("on_train_begin", args, state, control)
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\transformers\trainer_callback.py", line 498, in call_event
        result = getattr(callback, event)(
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\transformers\integrations\integration_utils.py", line 629, in on_train_begin
        self._init_summary_writer(args, log_dir)
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\transformers\integrations\integration_utils.py", line 615, in _init_summary_writer
        self.tb_writer = self._SummaryWriter(log_dir=log_dir)
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\torch\utils\tensorboard\writer.py", line 249, in __init__
        self._get_file_writer()
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\torch\utils\tensorboard\writer.py", line 281, in _get_file_writer
        self.file_writer = FileWriter(
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\torch\utils\tensorboard\writer.py", line 75, in __init__
        self.event_writer = EventFileWriter(
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\tensorboard\summary\writer\event_file_writer.py", line 72, in __init__
        tf.io.gfile.makedirs(logdir)
      File "C:\Users\Ata Onur Özdemir\venv\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 513, in recursive_create_dir_v2
        _pywrap_file_io.RecursivelyCreateDir(compat.path_to_bytes(path))
    tensorflow.python.framework.errors_impl.FailedPreconditionError: C:\Users\Ata Onur Özdemir\PycharmProjects\Emotion_Detection\output is not a directory
    

    D:\TEMP>tree PycharmProjects
    Folder PATH listing
    Volume serial number is A544-D3FB
    D:\TEMP\PYCHARMPROJECTS
    Invalid path - \TEMP\PYCHARMPROJECTS
    No subfolders exist
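
    A minimal probe (a sketch, not from the original thread) that calls the same tf.io.gfile function the traceback fails in, once on the suspect non-ASCII path and once on a hypothetical ASCII-only control path:

    import tensorflow as tf

    # Probe sketch: the traceback dies inside tf.io.gfile.makedirs, so call
    # it directly. The second path is a made-up ASCII-only control; if only
    # the first one fails, the non-ASCII characters are the likely culprit.
    paths = [
        'C:\\Users\\Ata Onur Özdemir\\PycharmProjects\\Emotion_Detection\\output',
        'C:\\tmp\\ascii_only_control\\output',
    ]
    for p in paths:
        try:
            tf.io.gfile.makedirs(p)
            print(p, '->', 'isdir:', tf.io.gfile.isdir(p))
        except Exception as e:
            print(p, '->', type(e).__name__, ':', e)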

  • Here is what I tested:

    function testRounding() {
      var order = { size1: 0.07529999999999999, size2: 0.07529999999999999 };
      order.size1 = order.size1.roundDecimal(4);
      order.size2.roundDecimal(4);
      Logger.log(order.size1); //outputs 0.0753
      Logger.log(order.size2); //outputs 0.07529999999999999 but desired output is 0.0753
    }
    
    Number.prototype.roundDecimal = function (place) {
      var mult = 1;
      for (var i = 1; i < place; i++) {
        mult = mult * 10;
      }
      return Math.round(this * 10 * mult) / (10 * mult);
    }
    

    I can see from the logger output that the method works, but it does not change the object it is applied to unless I deliberately overwrite the value, as I did with size1. I would rather not have to do that; instead, I want the syntax I used for size2 to actually modify size2's value.

    How can I do that?

  • I am trying to run the sample code for FinGPT_forecaster, which uses llama. Sample code and information about this financial GPT are available here: https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT_Forecaster

    I registered with Meta and created a Hugging Face account. I generated a token on Hugging Face and embedded it in the code below.

    I modified the code slightly so that it includes my token, so it looks like this (the token is written as xxx here for privacy):

    from datasets import load_dataset
    from transformers import AutoTokenizer, AutoModelForCausalLM
    from peft import PeftModel
    import torch
    
    
    base_model = AutoModelForCausalLM.from_pretrained(
        'meta-llama/Llama-2-7b-chat-hf',
        trust_remote_code=True,
        device_map="auto",
        torch_dtype=torch.float16,
        token='xxx' # optional if you have enough VRAM
    )
    
    tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-chat-hf',token='xxx')
    print("hi")
    model = PeftModel.from_pretrained(base_model, 'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora',token='xxx')
    print("hi2")
    model = model.eval()
    

    When I run it, it stalls at the line model = PeftModel... and I repeatedly get the following messages:

    Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
    Some parameters are on the meta device device because they were offloaded to the cpu and disk.
    hi
    
    C:\Users\xxx\AppData\Roaming\Python\Python311\site-packages\torch\nn\modules\module.py:2047: UserWarning: for base_model.model.model.layers.19.self_attn.q_proj.lora_A.default.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
      warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '
    

    Then I get this message:

    Traceback (most recent call last):
    
      File c:\ProgramData\Anaconda3\Lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
        exec(code, globals, locals)
    
      File c:\users\xxx\downloads\fingpt20240715.py:23
        model = PeftModel.from_pretrained(base_model, 'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora',token='xxx')
    
      File ~\AppData\Roaming\Python\Python311\site-packages\peft\peft_model.py:430 in from_pretrained
        model.load_adapter(model_id, adapter_name, is_trainable=is_trainable, **kwargs)
    
      File ~\AppData\Roaming\Python\Python311\site-packages\peft\peft_model.py:1022 in load_adapter
        self._update_offload(offload_index, adapters_weights)
    
      File ~\AppData\Roaming\Python\Python311\site-packages\peft\peft_model.py:908 in _update_offload
        safe_module = dict(self.named_modules())[extended_prefix]
    
    KeyError: 'base_model.model.model.model.embed_tokens'
    
    

    Any suggestions? Thanks in advance.
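
    One thing worth ruling out (a sketch under the assumption that the KeyError comes from PEFT's CPU/disk offload handling, which only runs when device_map="auto" offloads shards) is to load the base model without device_map, so every parameter stays in one place when the LoRA adapter attaches. This needs roughly 14 GB of memory for the fp16 7B model:

    # Sketch: same calls as above, minus device_map="auto", so accelerate
    # does not offload any shards and peft's _update_offload path is skipped.
    base_model = AutoModelForCausalLM.from_pretrained(
        'meta-llama/Llama-2-7b-chat-hf',
        trust_remote_code=True,
        torch_dtype=torch.float16,
        token='xxx',
    )
    model = PeftModel.from_pretrained(
        base_model,
        'FinGPT/fingpt-forecaster_dow30_llama2-7b_lora',
        token='xxx',
    )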

  • Numbers are immutable (thankfully), so you cannot do this without creating another type. Also, do not extend built-in prototypes: it is bad practice, it can clash with future extensions of the language, and it causes weirdness across the whole program, like global mutable state or worse. The best approach is to write a standalone function.

  • Ira · 2 months ago

    @Luuk I checked all the paths and they are all valid directories. The code does create the folder, but then it does not read the folder it just created. I cannot understand what this error means; I have been searching for a solution for 3-4 days.

  • @Luuk Yes, but I need a solution: this is my internship project and I have to fix it, which is why I cannot close this question.

  • I created a minimal reproducible example of the problem:

    import os
    
    output_dir = 'D:\\TEMP\\PhycharmProjects\\Emotion_Detection\\output'
    
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)
        print(f"{output_dir} created!")
    elif not os.path.isdir(output_dir):
        print(f"{output_dir} NOT created!")
        raise NotADirectoryError(f"{output_dir} is not a directory")
    

    Running this code produces the following output:

    D:\TEMP>python testdir.py
    D:\TEMP\PhycharmProjects\Emotion_Detection\output created!
    

    I even double-checked (not that it was necessary, but…):

    D:\TEMP>tree PhycharmProjects
    Folder PATH listing for volume HDD
    Volume serial number is D46B-804B
    D:\TEMP\PHYCHARMPROJECTS
    Emotion_Detection
        output
    