ValueError: Node 'gradients/InceptionResnetV1/Bottleneck/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512].
#1246
C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Scripts\python.exe C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\detection.py
2023-12-28 18:11:21.029652: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
WARNING:tensorflow:From C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
WARNING:tensorflow:From C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\detection.py:13: The name tf.disable_eager_execution is deprecated. Please use tf.compat.v1.disable_eager_execution instead.
2023-12-28 18:11:30.795362: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:From C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\util\dispatch.py:1260: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
2023-12-28 18:11:30.961375: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:388] MLIR V1 optimization pass is not enabled
WARNING:tensorflow:From C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\align\detect_face.py:172: The name tf.nn.xw_plus_b is deprecated. Please use tf.compat.v1.nn.xw_plus_b instead.
Model directory: models
Metagraph file: model-20180402-114759.meta
Checkpoint file: model-20180402-114759.ckpt-275
WARNING:tensorflow:From C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\facenett\facenet.py:382: The name tf.train.import_meta_graph is deprecated. Please use tf.compat.v1.train.import_meta_graph instead.
Traceback (most recent call last):
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\framework\importer.py", line 511, in _import_graph_def_internal
results = c_api.TF_GraphImportGraphDefWithResults(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tensorflow.python.framework.errors_impl.InvalidArgumentError: Node 'gradients/InceptionResnetV1/Bottleneck/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\detection.py", line 132, in
run('models', 'models/facemodel.pkl')
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\detection.py", line 97, in run
face_recognition = Recognition(model_checkpoint, classifier)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\facenett\face_contrib.py", line 31, in init
self.encoder = Encoder(facenet_model_checkpoint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\facenett\face_contrib.py", line 71, in init
load_model(facenet_model_checkpoint)
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\facenett\facenet.py", line 382, in load_model
saver = tf.compat.v1.train.import_meta_graph(os.path.join(model_exp, meta_file), input_map=input_map)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\training\saver.py", line 1583, in import_meta_graph
return _import_meta_graph_with_return_elements(meta_graph_or_file,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\training\saver.py", line 1604, in _import_meta_graph_with_return_elements
meta_graph.import_scoped_meta_graph_with_return_elements(
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\framework\meta_graph.py", line 785, in import_scoped_meta_graph_with_return_elements
imported_return_elements = importer.import_graph_def(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\util\deprecation.py", line 588, in new_func
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\framework\importer.py", line 407, in import_graph_def
return _import_graph_def_internal(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Choso\PycharmProjects\AttendanceSystemDemo\venv\Lib\site-packages\tensorflow\python\framework\importer.py", line 516, in _import_graph_def_internal
raise ValueError(str(e))
ValueError: Node 'gradients/InceptionResnetV1/Bottleneck/BatchNorm/cond/FusedBatchNorm_1_grad/FusedBatchNormGrad' has an _output_shapes attribute inconsistent with the GraphDef for output #3: Dimension 0 in both shapes must be equal, but are 0 and 512. Shapes are [0] and [512].
Process finished with exit code 1
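The failure happens while importing the training metagraph (model-20180402-114759.meta). That file still contains the training-time gradient ops (gradients/InceptionResnetV1/.../FusedBatchNormGrad), and the _output_shapes attribute recorded for them by the TensorFlow 1.x export is inconsistent with what the current importer expects, so import_graph_def rejects the graph. The log shows the directory branch of load_model (pasted below) being taken, which ends in tf.compat.v1.train.import_meta_graph on that metagraph.

One possible workaround is to drop the stale shape hints before importing. A minimal sketch, assuming the _output_shapes attributes can simply be removed and re-inferred; this is not the project's code, and the helper name and commented usage are made up for illustration:

```python
# Sketch of a possible workaround (not the project's code): parse the MetaGraphDef
# yourself, delete the stale `_output_shapes` hints, and import the cleaned graph.
import tensorflow as tf
from tensorflow.core.protobuf import meta_graph_pb2


def import_meta_graph_without_shape_hints(meta_path, input_map=None):
    """Import a .meta file after stripping the recorded output-shape hints."""
    meta_graph_def = meta_graph_pb2.MetaGraphDef()
    with tf.io.gfile.GFile(meta_path, 'rb') as f:
        meta_graph_def.ParseFromString(f.read())
    # `_output_shapes` is only a hint; deleting it lets the importer re-infer
    # the shapes instead of rejecting the ones recorded by the old export.
    for node in meta_graph_def.graph_def.node:
        if '_output_shapes' in node.attr:
            del node.attr['_output_shapes']
    # import_meta_graph also accepts an in-memory MetaGraphDef.
    return tf.compat.v1.train.import_meta_graph(meta_graph_def, input_map=input_map)


# Hypothetical replacement for the import_meta_graph call inside facenet.load_model:
# saver = import_meta_graph_without_shape_hints(os.path.join(model_exp, meta_file),
#                                               input_map=input_map)
# saver.restore(tf.compat.v1.get_default_session(), os.path.join(model_exp, ckpt_file))
```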
def load_model(model, input_map=None):
    # Check if the model is a model directory (containing a metagraph and a checkpoint file)
    # or if it is a protobuf file with a frozen graph
    model_exp = os.path.expanduser(model)
    if (os.path.isfile(model_exp)):
        print('Model filename: %s' % model_exp)
        with gfile.FastGFile(model_exp, 'rb') as f:
            graph_def = tf.compat.v1.GraphDef()
            graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, input_map=input_map, name='')
    else:
        print('Model directory: %s' % model_exp)
        meta_file, ckpt_file = get_model_filenames(model_exp)
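Note that the first branch of load_model handles a frozen protobuf graph, which contains only inference ops and no gradients/... nodes, so it never goes through import_meta_graph. A minimal usage sketch, assuming the pretrained FaceNet download also includes a frozen 20180402-114759.pb and uses the usual input/embeddings tensor names (both are assumptions to verify against your files):

```python
# Hedged sketch: load the frozen inference graph instead of the checkpoint directory.
# 'models/20180402-114759.pb', 'input:0' and 'embeddings:0' are assumptions about
# the pretrained FaceNet release; check them against the model you actually have.
import tensorflow as tf
from facenett.facenet import load_model

tf.compat.v1.disable_eager_execution()

with tf.Graph().as_default():
    with tf.compat.v1.Session() as sess:
        # Passing a file path (not a directory) takes the frozen-graph branch above.
        load_model('models/20180402-114759.pb')
        graph = tf.compat.v1.get_default_graph()
        images_placeholder = graph.get_tensor_by_name('input:0')
        embeddings = graph.get_tensor_by_name('embeddings:0')
```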