
kivy "configure: error: C compiler cannot create executables; see `config.log' for more details" error

Jérôme Richard 2 months ago



https://github.com/hyun071111/kivyapp I'm building an Android app with kivy, but I keep getting this error... Can anyone help me?

Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in \_run_module_as_main
return \_run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in \_run_code
exec(code, run_globals)
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1256, in \<module\>
main()
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main
ToolchainCL()
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 685, in __init__
getattr(self, command)(args)
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 104, in wrapper_func
build_dist_from_args(ctx, dist, args)
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 163, in build_dist_from_args
build_recipes(build_order, python_modules, ctx,
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 504, in build_recipes
recipe.build_arch(arch)
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/recipes/libffi/__init__.py", line 30, in build_arch  
shprint(sh.Command('./configure'),
File "/mnt/d/hackerton/app/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint
for line in output:
File "/home/qkrgusdnr/.local/lib/python3.10/site-packages/sh.py", line 915, in next
self.wait()
File "/home/qkrgusdnr/.local/lib/python3.10/site-packages/sh.py", line 845, in wait
self.handle_command_exit_code(exit_code)
File "/home/qkrgusdnr/.local/lib/python3.10/site-packages/sh.py", line 869, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_77:

RAN: /mnt/d/hackerton/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi/configure --host=aarch64-linux-android --prefix=/mnt/d/hackerton/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi --disable-builddir --enable-shared

STDOUT:
checking build system type... x86_64-pc-linux-gnu
checking host system type... aarch64-unknown-linux-android
checking target system type... aarch64-unknown-linux-android
checking for gsed... sed
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for aarch64-linux-android-strip... /home/qkrgusdnr/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip --strip-unneeded
checking for a race-free mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make -j12 sets $(MAKE)... yes
checking whether make -j12 supports nested variables... yes
checking for aarch64-linux-android-gcc... /usr/bin/ccache /home/qkrgusdnr/.buildozer/android/platform/android-ndk-r25b/toolchains/llvm/prebuilt/linux-x86_64/bin/clang -target aarch64-linux-android21 -fomit-frame-pointer -march=armv8-a -fPIC
checking whether the C compiler works... no
configure: error: in `/mnt/d/hackerton/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/libffi/arm64-v8a__ndk_target_21/libffi': configure: error: C compiler cannot create executables See `config.log' for more details

STDERR:

# Command failed: ['/usr/bin/python3', '-m', 'pythonforandroid.toolchain', 'create', '--dist_name=eyein', '--bootstrap=sdl2', '--requirements=python3,kivy,opencv-python-headless,numpy,tensorflow,tensorflow_hub,pyttsx3', '--arch=arm64-v8a', '--arch=armeabi-v7a', '--copy-libs', '--color=always', '--storage-dir=/mnt/d/hackerton/app/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a', '--ndk-api=21', '--ignore-setup-py', '--debug']

# ENVIRONMENT:

# SHELL = '/bin/bash'

# WSL2_GUI_APPS_ENABLED = '1'

# WSL_DISTRO_NAME = 'Ubuntu'

# NAME = 'DESKTOP-KNK30L5'

# PWD = '/mnt/d/hackerton/app'

# LOGNAME = 'qkrgusdnr'

# HOME = '/home/qkrgusdnr'

# LANG = 'C.UTF-8'

# WSL_INTEROP = '/run/WSL/65382_interop'

# LS_COLORS = 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:'

# WAYLAND_DISPLAY = 'wayland-0'

# LESSCLOSE = '/usr/bin/lesspipe %s %s'

# TERM = 'xterm-256color'

# LESSOPEN = '| /usr/bin/lesspipe %s'

# USER = 'qkrgusdnr'

# DISPLAY = ':0'

# SHLVL = '1'

# XDG_RUNTIME_DIR = '/run/user/1000/'

# WSLENV = ''

# XDG_DATA_DIRS = '/usr/share/gnome:/usr/local/share:/usr/share:/var/lib/snapd/desktop'

# PATH = ('/home/qkrgusdnr/.buildozer/android/platform/apache-ant-1.9.4/bin:/home/qkrgusdnr/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/d/Program '

'Files (x86)/VMware/VMware '
'Workstation/bin/:/mnt/c/Windows/system32:/mnt/c/Windows:/mnt/c/Windows/System32/Wbem:/mnt/c/Windows/System32/WindowsPowerShell/v1.0/:/mnt/c/Windows/System32/OpenSSH/:/mnt/c/Program '
'Files/nodejs/:/mnt/c/MinGW/bin:/mnt/c/Users/admin/AppData/Roaming/Microsoft/Windows/Start '
'Menu/Programs/Python '
'3.9:/mnt/c/Users/admin/AppData/Roaming/Microsoft/Windows/Start '
'Menu/Programs/Python '
'3.9/Script:/mnt/c/Users/admin/AppData/Roaming/Microsoft/Windows/Start '
'Menu/Programs/Python '
'3.12:/mnt/c/Users/admin/AppData/Roaming/Microsoft/Windows/Start '
'Menu/Programs/Python 3.12/Script:/mnt/c/Program '
'Files/Git/cmd:/Docker/host/bin:/mnt/c/Users/admin/AppData/Local/Programs/Python/Python312/Scripts/:/mnt/c/Users/admin/AppData/Local/Programs/Python/Python312/:/mnt/c/Users/admin/AppData/Roaming/Microsoft/Windows/Start '
'Menu/Programs/Python '
'3.12:/mnt/c/Users/admin/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/admin/AppData/Roaming/npm:/mnt/d/Users/admin/AppData/Local/Programs/Microsoft '
'VS Code/bin:/mnt/d/Program Files/JetBrains/PyCharm '
'2024.1/bin:/mnt/c/Users/admin/AppData/Local/Microsoft/WinGet/Packages/Schniz.fnm_Microsoft.Winget.Source_8wekyb3d8bbwe:/snap/bin')

# DBUS_SESSION_BUS_ADDRESS = 'unix:path=/run/user/1000/bus'

# HOSTTYPE = 'x86_64'

# PULSE_SERVER = 'unix:/mnt/wslg/PulseServer'

# _ = '/home/qkrgusdnr/.local/bin/buildozer'

# OLDPWD = '/mnt/d/hackerton'

# PACKAGES_PATH = '/home/qkrgusdnr/.buildozer/android/packages'

# ANDROIDSDK = '/home/qkrgusdnr/.buildozer/android/platform/android-sdk'

# ANDROIDNDK = '/home/qkrgusdnr/.buildozer/android/platform/android-ndk-r25b'

# ANDROIDAPI = '31'

# ANDROIDMINAPI = '21'

# 

# Buildozer failed to execute the last command

# The error might be hidden in the log above this error

# Please read the full log, and search for it before

# raising an issue with buildozer itself.

# In case of a bug report, please add a full log with log_level = 2
Latest replies (0)
  • I'm trying to adapt TensorFlow's example UNet for my own purposes. The main difference is that that UNet takes 128x128 images and masks, while my images are 512x512 and my masks are 100x100.

    This is the error I get when I try to run model.fit:

    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    Cell In[137], line 2
          1 # %%wandb
    ----> 2 model_history = model.fit(train_dataset, epochs=EPOCHS,
          3                           steps_per_epoch=STEPS_PER_EPOCH,
          4                           validation_steps=VALIDATION_STEPS,
          5                         validation_data=test_dataset,
          6                         callbacks = [ShapeLoggingCallback()])
          7                           #callbacks=[CustomCallback()]) # original: callbacks=[DisplayCallback()])
    
    File /opt/conda/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
        119     filtered_tb = _process_traceback_frames(e.__traceback__)
        120     # To get the full stack trace, call:
        121     # `keras.config.disable_traceback_filtering()`
    --> 122     raise e.with_traceback(filtered_tb) from None
        123 finally:
        124     del filtered_tb
    
    File /opt/conda/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
        119     filtered_tb = _process_traceback_frames(e.__traceback__)
        120     # To get the full stack trace, call:
        121     # `keras.config.disable_traceback_filtering()`
    --> 122     raise e.with_traceback(filtered_tb) from None
        123 finally:
        124     del filtered_tb
    
    ValueError: as_list() is not defined on an unknown TensorShape.
    

    However, I can run model.predict without any problem, and it produces the kind of predictions I'd expect from an untrained model.

    Here is the code I use to build and train the model:

    base_model = tf.keras.applications.MobileNetV2(input_shape=[512, 512, 3], include_top=False)
    
    # Use the activations of these layers
    layer_names = [
        'block_1_expand_relu',   # 256x256
        'block_3_expand_relu',   # 128x128
        'block_6_expand_relu',   # 64x64
        'block_13_expand_relu',  # 32x32
        'block_16_project',      # 16x16
    ]
    base_model_outputs = [base_model.get_layer(name).output for name in layer_names]
    
    # Create the feature extraction model
    down_stack = tf.keras.Model(inputs=base_model.input, outputs=base_model_outputs)
    
    down_stack.trainable = False
    
    up_stack = [
        pix2pix.upsample(512, 3),  # 16x16 -> 32x32
        pix2pix.upsample(256, 3),  # 32x32 -> 64x64
        pix2pix.upsample(128, 3),  # 64x64 -> 128x128
        pix2pix.upsample(64, 3),   # 128x128 -> 256x256
    ]
    
    def unet_model(output_channels:int):
        inputs = tf.keras.layers.Input(shape=[512, 512, 3])
    
        # Downsampling through the model
        skips = down_stack(inputs)
        x = skips[-1]
        skips = reversed(skips[:-1])
    
        # Upsampling and establishing the skip connections
        for up, skip in zip(up_stack, skips):
            x = up(x)
            concat = tf.keras.layers.Concatenate()
            x = concat([x, skip])
    
        # This is the last layer of the model
        last = tf.keras.layers.Conv2DTranspose(
            filters=output_channels, kernel_size=3, strides=2,
            padding='same')  # 256x256 -> 512x512
    
        x = last(x)
    
        return tf.keras.Model(inputs=inputs, outputs=x)
    
    OUTPUT_CLASSES = 5
    
    model = unet_model(output_channels=OUTPUT_CLASSES)
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    
    model_history = model.fit(train_dataset, epochs=EPOCHS,
                            validation_data=test_dataset)
    

    Here is the code I use to create the datasets:

    # Get train and validation image paths
    train_tiles, train_masks = get_image_paths(train_patient_ids, patient_data)
    val_tiles, val_masks = get_image_paths(val_patient_ids, patient_data)
    test_tiles, test_masks = get_image_paths(test_patient_ids, patient_data)
    
    def load_image_and_mask(image_path, mask_path):
        image = tf.io.read_file(image_path)
        image = tf.image.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, [512, 512])
        image.set_shape([512, 512, 3])
    
        mask = tf.io.read_file(mask_path)
        mask = tf.image.decode_png(mask, channels=1) # change channels?
        mask = tf.image.flip_up_down(mask)
        mask = tf.image.resize(mask, [100, 100])
        mask.set_shape([100, 100, 1])
    
        return image, mask
    
    def process_paths(image_path, mask_path):
        image, mask = tf.py_function(load_image_and_mask, [image_path, mask_path], [tf.float32, tf.float32])
        return image, mask
    
    BATCH_SIZE = 16  # Set the batch size as needed
    
    # train
    train_dataset = tf.data.Dataset.from_tensor_slices((train_tiles, train_masks))
    train_dataset = train_dataset.map(process_paths, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    train_dataset = train_dataset.batch(BATCH_SIZE).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    
    # val
    val_dataset = tf.data.Dataset.from_tensor_slices((val_tiles, val_masks))
    val_dataset = val_dataset.map(process_paths, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    val_dataset = val_dataset.batch(BATCH_SIZE).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    
    # test
    test_dataset = tf.data.Dataset.from_tensor_slices((test_tiles, test_masks))
    test_dataset = test_dataset.map(process_paths, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    test_dataset = test_dataset.batch(BATCH_SIZE).prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
    
    

    I tried checking the image and mask shapes for each batch. Except for the last batch (which has only one image), every batch has image shape (16, 512, 512, 3) and mask shape (16, 100, 100, 1).

    I tried putting this code into my process_paths function (as the tutorial calls it); a fuller sketch of the usual fix appears at the end of this post:

    image = tf.reshape(image, [512, 512, 3])
    
    ...
    mask = tf.reshape(mask, [100, 100, 1])
    

    I played around with the numbers in the up_stack section but got nowhere, since I don't understand that part. My assumption is that, having changed the input size, I need to change the output sizes of the model's layers, but I'm not sure how. I'm also confused about why I can still run model.predict in this situation.

    My tensorflow version is 2.16.1.
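    For reference, a minimal sketch of the usual workaround for this exact error with tf.py_function: the set_shape calls inside load_image_and_mask run on eager tensors and are lost at graph level, so the tensors coming out of py_function have a completely unknown TensorShape, which is what as_list() then chokes on. Re-declaring the static shapes on the tensors that py_function returns typically fixes it (names follow the code above; this is a common cause, not a confirmed diagnosis for this case):

    def process_paths(image_path, mask_path):
        image, mask = tf.py_function(load_image_and_mask,
                                     [image_path, mask_path],
                                     [tf.float32, tf.float32])
        # py_function outputs have unknown static shape; declare it here, on
        # the graph-level tensors that the rest of the pipeline actually sees.
        image.set_shape([512, 512, 3])
        mask.set_shape([100, 100, 1])
        return image, mask

    This would also explain why model.predict still works: predict never computes the loss, which is where the static shapes get inspected. Separately, note the decoder arithmetic: with a 512x512 input the encoder bottoms out at 16x16, the four upsample blocks bring that back to 256x256, and the final stride-2 Conv2DTranspose doubles it to 512x512, while the masks are 100x100, so the loss will still need image-sized masks (or a resized output) once the shape error is gone.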

  • BENG 2 months ago

    Always put the full error message (starting from the word "Traceback") in the question (not in the comments), as text (not a screenshot, not a link to an external portal). There is other useful information in the full error/traceback. It will also be more readable, and more people will see it, so more people may be able to help.

  • I'm trying to create an array of objects derived from a list of users, and I have the following line of code:

    var usersEmail=group.map((user)=>{eMail : user.email, rOle : user.role});

    When I try to save the code, the ':' in 'rOle : user.role' causes an unexpected ':' error. If I use this code instead:

    var usersEmail=group.map((user)=>[user.email, user.role]);
    

    it creates two-element arrays, which works but isn't what I want. My last attempt was this:

    var usersEmail=group.map((user)=>[{eMail : user.email, rOle : user.role}]);
    

    That creates single-element arrays, each containing the correct object. I feel that what I'm trying to do is entirely possible, but I don't have the right structure for it.

    Thanks for your help.

  • Sorry, I found the solution myself. I just needed to wrap the object-creation clause in ( ), so it becomes:

    var usersEmail = group.map((user)=> ({eMail : user.email, rOle : user.role}));

  • I think this is what you're looking for:

    var usersEmail = group.map((user)=> ({eMail : user.email, rOle : user.role}));
    

    The parentheses are needed here because you're using an inline (anonymous) function, so you need something to indicate that this really is an object literal and not a function body.

    You can also do it like this:

     var usersEmail = group.map((user)=> {
        return {eMail : user.email, rOle : user.role};
     });
    
  • I'm new to machine learning, and I need to detect certain user commands in my React Native app. For this I prepared the following ML example, and the test results fully meet my needs (I'm shortening the variant lists to avoid making this too long).

    import tensorflow as tf
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.optimizers import Adam
    from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    import json  # used below to save the tokenizer config
    
    # Expanded dataset
    commands = [
        "turn on camera",
        "open history",
        "other",
        "math"
    ]
    
    # Synonymous sentences
    command_variants = {
        "turn on camera": [
            "activate camera", "I want to open the camera", "I want to take a picture", 
        ],
        "open history": [
            "show history", "display previous records", "I want to look at past records", 
        ],
        "other": [
            "give a command", "What is the square root of 16?", "How do you solve quadratic equations?", 
        ],
        "math": [
            "What is 2 + 2?", "Calculate 5 - 3.", "Solve 6 * 7.", "What is 8 / 4?", 
        ]
    }
    
    
    sentences = []
    labels = []
    for label, (command, variants) in enumerate(command_variants.items()):
        sentences.append(command)
        labels.append(label)
        for variant in variants:
            sentences.append(variant)
            labels.append(label)
    
    tokenizer = Tokenizer(num_words=5000, oov_token="<OOV>")
    tokenizer.fit_on_texts(sentences)
    
    tokenizer_config = {
        "word_index": tokenizer.word_index,
        "index_word": {v: k for k, v in tokenizer.word_index.items()},
        "num_words": tokenizer.num_words,
        "oov_token": tokenizer.oov_token
    }
    
    with open('m_exports/commands_tokenizer.json', 'w') as f:
        json.dump(tokenizer_config, f)
    
    sequences = tokenizer.texts_to_sequences(sentences)
    max_len = 250
    padded_sequences = pad_sequences(sequences, maxlen=max_len, padding='post')
    
    label_counts = pd.Series(labels).value_counts()
    print("Label Distribution:")
    print(label_counts)
    
    model = tf.keras.models.Sequential([
        tf.keras.layers.Embedding(input_dim=5000, output_dim=128, input_length=max_len),
        tf.keras.layers.Bidirectional(tf.keras.layers.SimpleRNN(128)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(len(command_variants), activation='softmax')
    ])
    
    optimizer = Adam(learning_rate=0.001)
    model.compile(optimizer=optimizer, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=3, min_lr=0.0001)
    early_stopping = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
    
    history = model.fit(padded_sequences, np.array(labels), epochs=10, validation_split=0.2, callbacks=[reduce_lr, early_stopping])
    
    plt.figure(figsize=(12, 6))
    plt.subplot(1, 2, 1)
    plt.plot(history.history['loss'])
    plt.plot(history.history['val_loss'])
    plt.title('Model Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend(['Train', 'Validation'])
    
    plt.subplot(1, 2, 2)
    plt.plot(history.history['accuracy'])
    plt.plot(history.history['val_accuracy'])
    plt.title('Model Accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend(['Train', 'Validation'])
    
    plt.show()
    
    print(f'Final Training Loss: {history.history["loss"][-1]}')
    print(f'Final Training Accuracy: {history.history["accuracy"][-1]}')
    print(f'Final Validation Loss: {history.history["val_loss"][-1]}')
    print(f'Final Validation Accuracy: {history.history["val_accuracy"][-1]}')
    
    test_sentences = [
        "turn on camera",
        "activate microphone",
        "show history",
        "give a command",
        "stop camera",
        "pause recording",
        "resume recording",
        "start new recording",
        "play last recording"
    ]
    
    test_sequences = tokenizer.texts_to_sequences(test_sentences)
    padded_test_sequences = pad_sequences(test_sequences, maxlen=max_len, padding='post')
    
    predictions = model.predict(padded_test_sequences)
    
    for sentence, prediction in zip(test_sentences, predictions):
        predicted_label = np.argmax(prediction)
        predicted_probability = prediction[predicted_label]
        print(f'Sentence: "{sentence}"')
        print(f'Predicted label index: {predicted_label}')
        print(f'Predicted label: {list(command_variants.keys())[predicted_label]}')
        print(f'Probability: {predicted_probability}')
        print(f'Raw probabilities: {prediction}')
    

    To run this model in my React Native app, I used the https://github.com/mrousavy/react-native-fast-tflite package. The TFLite file I had prepared earlier runs, but the detection rate is very low.

    To improve it, I got help from an AI. The results got better, but I ran into the following error when generating the TFLite output:

    W0000 00:00:1721746792.488252 2555976 tf_tfl_flatbuffer_helpers.cc:392] Ignored output_format.
    W0000 00:00:1721746792.488261 2555976 tf_tfl_flatbuffer_helpers.cc:395] Ignored drop_control_dependency.
    2024-07-23 17:59:52.556361: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:3463] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
    Flex ops: FlexTensorListReserve, FlexTensorListSetItem, FlexTensorListStack
    Details:
        tf.TensorListReserve(tensor<2xi32>, tensor<i32>) -> (tensor<!tf_type.variant<tensor<?x128xf32>>>) : {device = ""}
        tf.TensorListSetItem(tensor<!tf_type.variant<tensor<?x128xf32>>>, tensor<i32>, tensor<?x128xf32>) -> (tensor<!tf_type.variant<tensor<?x128xf32>>>) : {device = "", resize_if_index_out_of_bounds = false}
        tf.TensorListStack(tensor<!tf_type.variant<tensor<?x128xf32>>>, tensor<2xi32>) -> (tensor<1x?x128xf32>) : {device = "", num_elements = 1 : i64}
    See instructions: https://www.tensorflow.org/lite/guide/ops_select
    

    I'm using the following code to export the TFLite file:

    import tensorflow as tf
    
    # Convert the model to TensorFlow Lite
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.experimental_enable_resource_variables = True
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS, 
        tf.lite.OpsSet.SELECT_TF_OPS
    ]
    converter._experimental_lower_tensor_list_ops = False
    
    # Convert the model
    tflite_model = converter.convert()
    
    # Save the TFLite model to a file
    with open('m_exports/commands_model.tflite', 'wb') as f:
        f.write(tflite_model)
    

    And I also get this error during my app's loading phase:

    Error: TFLite: Failed to allocate memory for input/output tensors! Status: unresolved-ops

    What should I do to export this model as a TFLite file so that I can use it in my React Native app?

    Thanks.
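    One commonly suggested way around this (a sketch, not verified against react-native-fast-tflite): the FlexTensorListReserve/FlexTensorListSetItem/FlexTensorListStack ops come from the RNN's dynamic while-loop, and the app-side unresolved-ops error appears because react-native-fast-tflite does not link the Flex delegate. Unrolling the RNN over the fixed sequence length makes Keras emit plain tensor ops, which should let the model convert with TFLITE_BUILTINS only (layer sizes copied from the training script above; the model has to be retrained after this change):

    import tensorflow as tf

    max_len = 250  # the same fixed padding length used for pad_sequences above

    model = tf.keras.models.Sequential([
        tf.keras.layers.Input(shape=(max_len,)),  # a static length is required for unrolling
        tf.keras.layers.Embedding(input_dim=5000, output_dim=128),
        # unroll=True expands the recurrence into ordinary ops instead of a
        # TensorList-backed while-loop, at the cost of a larger graph.
        tf.keras.layers.Bidirectional(tf.keras.layers.SimpleRNN(128, unroll=True)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(4, activation='softmax')  # 4 command classes
    ])

    # ...compile and fit exactly as in the training script, then convert with builtins only:
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    tflite_model = converter.convert()

    Unrolling 250 bidirectional timesteps produces a noticeably larger graph; if that becomes a problem, shrinking max_len to something closer to the real command lengths would reduce it, and such aggressive post-padding of very short sentences might also be related to the low detection rate.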
