
AutoGraph reference

tensorflow/python/autograph/g3doc/reference/error_handling.md


Index

Error handling

When an exception occurs in code generated by AutoGraph, the error message is augmented with information about the location in the original code, before conversion.

When an error occurs in a TensorFlow graph constructed using AutoGraph code, the stack trace which points to where the failing op was created is modified to point to the original code, before conversion.

Python execution errors

Python execution (or tracing) exceptions that are raised in AutoGraph code are caught and re-raised with an extended error message that contains references to the original code.

These exceptions are re-raised by @tf.function. If you catch the exception with a try/except block inside the tf.function, you will obtain the original exception.

The exception traceback still contains the entire call stack, including frames corresponding to generated code.

AutoGraph tries to re-raise an exception of the same type as the original exception. This is usually possible for subclasses of Exception that do not define a custom __init__. For more complex exception types which define a custom constructor, AutoGraph raises a StagingError instead.
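A minimal sketch of this behavior, using the same type-mismatch error as the example below. The exact exception type that reaches the caller may vary across TensorFlow versions (TypeError, InvalidArgumentError, or StagingError), so the sketch deliberately catches a broad Exception:

```python
import tensorflow as tf

@tf.function
def bad_add():
  # Mixing int32 and float32 tensors fails at graph construction time.
  return tf.constant(1) + tf.constant(1.0)

try:
  bad_add()
  caught = None
except Exception as e:
  # Depending on the TensorFlow version, this may be a TypeError,
  # an InvalidArgumentError, or an AutoGraph StagingError.
  caught = type(e).__name__

print(caught)
```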

Among the distinctive features of the re-raised exception:

  • the exception traceback indicates the call stack of the exception, up to the first @tf.function
  • the error message includes references to the original code within the @tf.function
  • the references corresponding to converted code are marked with an asterisk (*)
  • the references corresponding to code which AutoGraph reached, but decided not to convert, are marked with a double asterisk (**)
  • the references corresponding to code that AutoGraph didn't reach at all have no marking

For example, the code below triggers an exception in the Python runtime, at graph construction time:

@tf.function
def f():
  tf.constant(1) + tf.constant(1.0)
f()

An excerpt of the exception that is raised is shown below:

Traceback (most recent call last):
  File "<ipython-input-10-1938a51c970d>", line 11, in <module>
    f()
  File "tensorflow/python/eager/def_function.py", line 417, in __call__
    self._initialize(args, kwds, add_initializers_to=initializer_map)
  ... more TensorFlow internal frames ...
TypeError: in converted code:

    <ipython-input-9-002fa22f79df>:8 f  *
        tf.constant(1) + tf.constant(1.0)
    tensorflow/python/ops/math_ops.py:900 binary_op_wrapper  **
        return func(x, y, name=name)
    ... more TensorFlow internal frames ...

    TypeError: Input 'y' of 'AddV2' Op has type float32 that does not match type int32 of argument 'x'.

Note: the exact appearance of the various parts in the error message may change in the future.

Let's look at the individual components of this exception.

The traceback of the exception shows the call stack up to and including the call into @tf.function, along with any frames internal to TensorFlow:

Traceback (most recent call last):
  File "<ipython-input-10-1938a51c970d>", line 11, in <module>
    f()
  File "tensorflow/python/eager/def_function.py", line 417, in __call__
    self._initialize(args, kwds, add_initializers_to=initializer_map)
  File "tensorflow/python/eager/def_function.py", line 360, in _initialize
    *args, **kwds))
  File "tensorflow/python/eager/function.py", line 1688, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "tensorflow/python/eager/function.py", line 1992, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "tensorflow/python/eager/function.py", line 1878, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "tensorflow/python/framework/func_graph.py", line 791, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "tensorflow/python/eager/def_function.py", line 310, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "tensorflow/python/framework/func_graph.py", line 781, in wrapper
    raise e.ag_error_metadata.to_exception(type(e))

The exception message includes the location inside the converted function f:

TypeError: in converted code:

    <ipython-input-9-002fa22f79df>:8 f  *
        tf.constant(1) + tf.constant(1.0)
    tensorflow/python/ops/math_ops.py:900 binary_op_wrapper
        return func(x, y, name=name)
    tensorflow/python/ops/math_ops.py:1198 _add_dispatch
        return gen_math_ops.add_v2(x, y, name=name)
    tensorflow/python/ops/gen_math_ops.py:549 add_v2
        "AddV2", x=x, y=y, name=name)
    tensorflow/python/framework/op_def_library.py:564 _apply_op_helper
        inferred_from[input_arg.type_attr]))

Notice the frame corresponding to the call of f. The function was converted, as indicated by the asterisk (*) displayed next to f:

    <ipython-input-9-002fa22f79df>:8 f  *
        tf.constant(1) + tf.constant(1.0)

Lastly, the lower part includes the message that the exception originally reported:

    TypeError: Input 'y' of 'AddV2' Op has type float32 that does not match type int32 of argument 'x'.

Note: Typically, error messages raised by code internal to TensorFlow refer to arguments of the internal API that failed. Error messages raised by code internal to AutoGraph (that is, 'tensorflow/python/autograph') usually refer to symbols used in your code.

TensorFlow execution errors

TensorFlow execution errors are displayed normally, but the portions of the error message which correspond to user code contain references to the original code.

For example, the code below triggers an error in the TensorFlow runtime, at graph execution time:

@tf.function
def my_function():
  tf.Assert(tf.random.uniform(()) > 1.0, ['example error'])
my_function()

An excerpt of the exception that is subsequently raised is shown below:

Traceback (most recent call last):
  File "<ipython-input-16-af656fb445f0>", line 11, in <module>
    my_function()
  File "tensorflow/python/eager/def_function.py", line 435, in __call__
    return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds)
  File "tensorflow/python/eager/function.py", line 636, in _filtered_call
    self.captured_inputs)
  File "tensorflow/python/eager/function.py", line 734, in _call_flat
    outputs = self._inference_function.call(ctx, args)
  File "tensorflow/python/eager/function.py", line 460, in call
    ctx=ctx)
  File "tensorflow/python/eager/execute.py", line 68, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File "<string>", line 3, in raise_from
InvalidArgumentError:  assertion failed: [example error]
    [[node Assert/Assert (defined at <ipython-input-16-af656fb445f0>:8) ]] [Op:__inference_my_function_79]

Notice that the error message contains a reference to the location where the failing op was defined in the code (<ipython-input-16-af656fb445f0>:8):

InvalidArgumentError:  assertion failed: [example error]
    [[node Assert/Assert (defined at <ipython-input-16-af656fb445f0>:8) ]] [Op:__inference_my_function_79]
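Because this is an ordinary TensorFlow runtime error, it can be caught like any other tf.errors.OpError. A minimal sketch, reusing the same failing assertion:

```python
import tensorflow as tf

@tf.function
def my_function():
  # The assertion always fails: a uniform sample in [0, 1) is never > 1.0.
  tf.Assert(tf.random.uniform(()) > 1.0, ['example error'])

try:
  my_function()
  caught = None
except tf.errors.InvalidArgumentError as e:
  # The message includes the data passed to tf.Assert.
  caught = e.message

print(caught)
```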

AutoGraph conversion exceptions

Within @tf.function, when AutoGraph fails to convert a function, it displays a warning message and attempts to run the function without conversion.

For example, the code below makes a call to a Python generator function, which is not supported by AutoGraph:

def example_generator():
  yield 1

@tf.function
def f():
  for i in example_generator():
    print(i)

Calling f() will still run the code. AutoGraph converts the function f, but skips the function example_generator. In addition, AutoGraph prints a warning to the console indicating that the function will be called without being converted:

WARNING: Entity <function example_generator at 0x7f951b67f158> appears to be
a generator function. It will not be converted by AutoGraph.
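A sketch of what "runs without conversion" means in practice: since example_generator is not converted, the for loop is unrolled in Python while the function is being traced, so side effects in the loop body (here, appending to a Python list rather than printing) execute at trace time, not at graph execution time. This example is an illustration of the behavior described above, not part of the original text:

```python
import tensorflow as tf

seen = []  # records what the loop body observed at trace time

def example_generator():
  yield 1

@tf.function
def f():
  # example_generator is skipped by AutoGraph; the loop runs in
  # Python during tracing, so the values are plain Python ints.
  for i in example_generator():
    seen.append(i)

f()  # triggers the warning, but still runs
print(seen)
```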