TensorFlow Interview Questions and Answers

1. What is TensorFlow?

Ans: TensorFlow is a Python-based library used for creating machine learning applications. It is a low-level toolkit for performing complex mathematics. It offers users the customizability to build experimental learning architectures, to work with them, and to turn them into running software. It was initially created by researchers and engineers working on the Google Brain team, and it became open source in 2015.

TensorFlow is made up of two words, Tensor and Flow: a tensor is a representation of data as a multi-dimensional array, and flow means a series of operations performed on tensors.

2. What are Tensors?

Ans: Tensors are multi-dimensional arrays used in computer programming to represent large amounts of data in the form of numbers. Other n-dimensional array libraries, such as NumPy, are available, but TensorFlow stands apart from them because it offers methods to create tensor functions and automatically compute derivatives.
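As a quick illustration of the automatic differentiation mentioned above, here is a minimal sketch assuming TensorFlow 2.x with eager execution (the variable names and values are illustrative):

import tensorflow as tf

# A rank-1 tensor (vector) created from a Python list
x = tf.Variable([1.0, 2.0, 3.0])

with tf.GradientTape() as tape:
    # y = sum(x^2); TensorFlow records the operations on the tape
    y = tf.reduce_sum(tf.square(x))

# dy/dx = 2 * x, computed automatically
grad = tape.gradient(y, x)
print(grad.numpy())  # [2. 4. 6.]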

3. What is TensorBoard?

Ans: TensorBoard, a suite of visualization tools offered by TensorFlow's creators, is an easy way to visualize graphs and plot quantitative metrics about a graph, along with additional data such as images that pass through it.

4. What are the features of TensorFlow?

Ans: TensorFlow provides APIs for multiple languages, such as Python and C++, and has wide language support. Researchers are working on making it better every day, and recently, at the latest TensorFlow Summit, TensorFlow.js, a JavaScript library for training and deploying machine learning models, was introduced.

5. How many types of Tensors are there?

Ans: There are three types of Tensors used to create neural network models:

  • Constant Tensor
    Constant tensors are used as constants, as the name suggests. They create a node that takes a value and does not change it. A constant can be created using tf.constant:
    tf.constant(value, dtype=None, shape=None, name='Const', verify_shape=False)
    It accepts five arguments.
  • Variable Tensor
    Variable tensors are nodes that provide their current value as output, which means they can retain their value over multiple executions of a graph.
  • Placeholder Tensor
    Placeholder tensors are more essential than variables. They are used to assign data at a later time. Placeholders are nodes whose values are fed in at execution time. Assume we have inputs to our network that depend on some external data, and we do not want the graph to depend on any real value while we are developing it; placeholders are a useful datatype in that case. We can even build the graph without any data.
    Therefore, placeholders do not require any initial value. They only need a datatype (such as float32) and a tensor shape, so the graph still knows what to compute even though it does not hold any stored values. A small example creating each of the three tensor types is shown after this list.
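The sketch below creates all three tensor types. Placeholders and sessions belong to the TensorFlow 1.x API, so the sketch assumes the tf.compat.v1 compatibility module; the names and values are illustrative.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # placeholders and sessions are TF 1.x concepts

# Constant tensor: a fixed value baked into the graph
a = tf.constant(3.0, dtype=tf.float32, name='a')

# Variable tensor: retains its value across executions of the graph
w = tf.Variable(2.0, dtype=tf.float32, name='w')

# Placeholder tensor: no initial value, only a dtype and a shape;
# its value is fed in at execution time
x = tf.placeholder(tf.float32, shape=(), name='x')

y = w * x + a

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: 5.0}))  # 2.0 * 5.0 + 3.0 = 13.0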

6. What are the advantages of TensorFlow?

Ans: Some of the main advantages of TensorFlow are given below:

  • It can easily be trained on CPUs as well as GPUs for distributed computing.
  • It has auto differentiation capabilities.
  • It has platform flexibility.
  • It is easily customizable and open-source.
  • It has advanced support for threads, asynchronous computations, and queues.

7. What are the three working components of TensorFlow architecture?

Ans: TensorFlow architecture works in three parts:

  • Preprocessing the data
  • Building the model
  • Training and estimating the model

8. Explain a few options to load data into TensorFlow.

Ans: Loading the data into TensorFlow is the first step before training a machine learning algorithm. There are two ways to load the data:

  • Load data in memory
    It is the easiest method. All the data is loaded into memory as a single array. One can write a Python code which is unrelated to TensorFlow.
  • Tensorflow data pipeline
    TensorFlow has built-in APIs which help to load the data, perform the operations, and feed the machine learning algorithm easily. This method is mostly used when there is a large dataset; a short tf.data sketch follows this list.
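A minimal sketch of such a tf.data pipeline, assuming TensorFlow 2.x and using illustrative in-memory data in place of a real dataset:

import numpy as np
import tensorflow as tf

# Illustrative in-memory data; in practice this could come from files on disk
features = np.random.rand(1000, 10).astype(np.float32)
labels = np.random.randint(0, 2, size=(1000,))

# Build a tf.data pipeline: shuffle, batch, and prefetch the data
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=1000)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)
)

for batch_features, batch_labels in dataset.take(1):
    print(batch_features.shape, batch_labels.shape)  # (32, 10) (32,)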

9. What do you know about TensorFlow abstractions?

Ans:

  • TensorFlow contains abstraction libraries such as TF-Slim and Keras, which provide simplified high-level access to TensorFlow. Such abstractions help to streamline the construction of data flow graphs.
  • TensorFlow abstractions not only help to make the code cleaner but also reduce the length of code drastically. As a result, they significantly reduce development time. A small Keras sketch follows below.
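As an illustration of how much such abstractions shorten the code, here is a minimal Keras sketch (the layer sizes and settings are illustrative, not recommendations):

import tensorflow as tf
from tensorflow import keras

# A small fully connected classifier defined in a few lines of Keras
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

model.summary()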

10. What are the APIs used outside the TensorFlow project?

Ans: There are a few APIs used outside the TensorFlow project, which are:

  • TFLearn
    TFLearn provides a high-level API which makes neural network building and training fast and easy. This API is fully compatible with TensorFlow. Its API can be indicated as tf.contrib.learn.
  • TensorLayer
    TensorLayer is a TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers/functions which are crucial for building real-world AI applications.
  • PrettyTensor
    Pretty Tensor delivers a high-level builder API for TensorFlow. It offers thin wrappers on tensors so that you can easily build multi-layer neural networks.
    Pretty Tensor provides a set of objects that behave like tensors. It also supports a chainable object syntax to define neural networks and other layered architectures in TensorFlow quickly.
  • Sonnet
    Sonnet is a library built on top of TensorFlow for creating complex neural networks.

11. What are the benefits of TensorFlow over other libraries? Explain.

Ans:

There are many benefits of TensorFlow over other libraries which are given below:

  • Scalability
    TensorFlow provides easily scaled machine learning applications and infrastructure.
  • Visualization of Data
    Visualizing the graph is very straightforward in TensorFlow. TensorBoard (a suite of visualization tools) is used to visualize TensorFlow graphs.
  • Debugging Facility
    tfdbg is a specialized debugger for TensorFlow. It lets us view the internal structure and states of running TensorFlow graphs during training and inference.
  • Pipelining
    TensorFlow’s Dataset module tf.data is used to build efficient pipelines for images and text.

12. Where can you run TensorFlow?

Ans: TensorFlow can run on different platforms:

  • Operating systems such as Windows, macOS, and Linux
  • Cloud web services
  • Mobile operating systems such as iOS and Android

13. What is the difference between Session.run() and Tensor.eval()?

Ans: In TensorFlow, you create graphs and pass values to them. The graph does all the hard work and generates the output based on the configuration you have made in it. When you pass values to the graph, you first need to create a TensorFlow session.

tf.Session()

Once the session is initialized, you are supposed to use it, because all the variables and settings are now part of the session.

So, there are two ways to pass external values to the graph so that the graph accepts them. One is to call .run() on the session being executed. The other way, which is basically a shortcut to this, is to use .eval(). It is called a shortcut because the full form of .eval() is:

tf.get_default_session().run(values)

In place of values.eval(), you can run tf.get_default_session().run(values) and you will get the same behavior; what eval does is use the default session and then execute run().
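A minimal sketch showing the equivalence, assuming the TensorFlow 1.x API via tf.compat.v1 (the values are illustrative):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # Session and eval() are TF 1.x concepts

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a * b

with tf.Session() as sess:
    # Explicitly running the graph through the session
    print(sess.run(c))                       # 6.0

    # eval() is a shortcut: it uses the default session and calls run()
    print(c.eval())                          # 6.0
    print(tf.get_default_session().run(c))   # 6.0, the expanded form of eval()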

14. Describe the common steps in most TensorFlow algorithms.

Ans:

  • Import data, generate data, or set up a data pipeline through placeholders.
  • Feed the data through the computational graph.
  • Evaluate the output on the loss function.
  • Use backpropagation to modify the variables.
  • Repeat until the stopping condition is met (a minimal sketch of these steps follows below).
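A minimal TensorFlow 1.x-style sketch of these steps on a toy linear regression problem (all names, data, and hyperparameters are illustrative):

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# 1. Generate toy data and set up placeholders
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 1.0
x = tf.placeholder(tf.float32)
y_true = tf.placeholder(tf.float32)

# 2. Feed the data through the computational graph
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = w * x + b

# 3. Evaluate the output on the loss function
loss = tf.reduce_mean(tf.square(y_pred - y_true))

# 4. Use backpropagation (via an optimizer) to modify the variables
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

# 5. Repeat until the stopping condition (here, a fixed number of steps)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        sess.run(train_op, feed_dict={x: x_data, y_true: y_data})
    print(sess.run([w, b]))  # should approach [3.0, 1.0]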

15. What do you know about TensorFlow Managers?

Ans: The TensorFlow managers are responsible for loading, unloading, lookup, and lifetime management of all servable objects via their loaders. TensorFlow Managers control the full lifecycle of Servables, including:

  • Loading Servables
  • Serving Servables
  • Unloading Servables

It is an abstract class. Its syntax is:

  #include <manager.h>

16. What are TensorFlow servables? Also, explain TensorFlow Serving.

Ans: Clients use certain objects to perform computations, and these objects are known as Servables. The size of a servable is flexible: a single servable might contain anything from a lookup table to a single model to a tuple of inference models. These servables are the central rudimentary units in TensorFlow Serving.

TensorFlow Serving is designed for production environments. It is a flexible, high-performance serving system used for machine learning models. TensorFlow Serving easily deploys new algorithms and experiments while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models. It can also be easily extended to serve other types of models and data whenever required.
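A minimal sketch of how a model might be exported in the SavedModel format that TensorFlow Serving consumes, assuming TensorFlow 2.x with Keras; the model, path, and version directory are illustrative:

import tensorflow as tf
from tensorflow import keras

# Build and train a model (training omitted for brevity)
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# Export the model in the SavedModel format that TensorFlow Serving consumes;
# the version subdirectory ("1") lets Serving manage multiple model versions
tf.saved_model.save(model, '/tmp/my_model/1')

The exported directory can then be served, for example, with the tensorflow_model_server binary or the tensorflow/serving Docker image, pointing the server's model base path at /tmp/my_model.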

17. What are the use cases of TensorFlow?

Ans: TensorFlow is an essential tool for deep learning. It has five main use cases:

  • Text-Based Applications
  • Voice/Sound Recognition
  • Time Series
  • Image Recognition
  • Video Detection

18. How does TensorFlow use the Python API?

Ans: Python is the primary language when it comes to TensorFlow and its development. It is the first and most recognizable language supported by TensorFlow, and it still supports most of the features. It seems that the functionality of TensorFlow was initially defined in Python and later moved to C++.

19. What is the Image Dashboard in TensorBoard?

Ans: The Image Dashboard is used to display PNG files that were saved via tf.summary.image. The dashboard is configured so that each row corresponds to a different tag and each column corresponds to a run. The Image Dashboard also supports arbitrary PNGs, which can be used to embed custom visualizations (e.g., matplotlib scatterplots) into TensorBoard. This dashboard always shows the latest image for each tag.
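A minimal sketch of logging images for the Image Dashboard, assuming TensorFlow 2.x; the log directory, tag name, and image data are illustrative:

import numpy as np
import tensorflow as tf

# Write a batch of images to an event file that TensorBoard's Image Dashboard reads
writer = tf.summary.create_file_writer('/tmp/logs/images')

# Illustrative data: 4 random 28x28 grayscale images, shape [batch, height, width, channels]
images = np.random.rand(4, 28, 28, 1).astype(np.float32)

with writer.as_default():
    tf.summary.image('random_digits', images, step=0, max_outputs=4)

# Then run: tensorboard --logdir /tmp/logs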


20. What do you know about the Audio Dashboard?

Ans: The Audio Dashboard is used to embed playable audio widgets for audio stored via tf.summary.audio. The dashboard is configured so that each row corresponds to a different tag and each column corresponds to a run. The Audio Dashboard always embeds the latest audio for each tag.


21. Describe the Graph Explorer in TensorFlow.

Ans: The Graph Explorer can be used while visualizing a TensorBoard graph. It is also responsible for enabling inspection of the TensorFlow model. To get the best use of the graph visualizer, one should use name scopes to group the ops in a graph hierarchically. Otherwise, the graph may be challenging to decipher.
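A minimal sketch of grouping ops with name scopes so that the Graph Explorer shows a readable hierarchy, assuming the TensorFlow 1.x graph API via tf.compat.v1 (the names and shapes are illustrative):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Grouping ops with name scopes makes the Graph Explorer hierarchy readable
with tf.name_scope('inputs'):
    x = tf.placeholder(tf.float32, shape=(None, 10), name='x')

with tf.name_scope('hidden_layer'):
    w = tf.Variable(tf.random.normal([10, 5]), name='weights')
    b = tf.Variable(tf.zeros([5]), name='bias')
    h = tf.nn.relu(tf.matmul(x, w) + b)

# Write the graph definition so TensorBoard's Graph Explorer can display it
writer = tf.summary.FileWriter('/tmp/logs/graph', tf.get_default_graph())
writer.close()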

22. What are some of the important parameters to consider when implementing a random forest algorithm in TensorFlow?

Ans: There are six main parameters you should think about and plan when implementing a random forest algorithm in TensorFlow:

  • Number of inputs
  • Feature count
  • Number of samples per batch
  • Total number of training steps
  • Number of trees
  • Maximum number of nodes

23. What are some of the numerical and categorical loss functions supported when working with TensorFlow?

Ans: Following are some of the widely used numerical and categorical loss functions supported when working with TensorFlow:

Numerical loss functions:

  • L1 loss
  • L2 loss
  • Pseudo-Huber loss

Categorical loss functions:

  • Hinge loss
  • Cross-entropy loss
  • Sigmoid-entropy loss
  • Weighted cross-entropy loss

Each of the loss functions mentioned above has a specific use based on the input data and the type of modeling involved.
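A minimal sketch computing a few of these losses with plain TensorFlow ops, assuming TensorFlow 2.x (the labels, predictions, and logits are illustrative):

import tensorflow as tf

y_true = tf.constant([1.0, 0.0, 1.0, 1.0])
y_pred = tf.constant([0.9, 0.2, 0.6, 0.4])

# Numerical losses
l1_loss = tf.reduce_mean(tf.abs(y_true - y_pred))      # L1 (absolute) loss
l2_loss = tf.reduce_mean(tf.square(y_true - y_pred))   # L2 (squared) loss

# Categorical loss: sigmoid cross-entropy on raw logits
logits = tf.constant([2.0, -1.5, 0.4, -0.3])
xent = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logits))

print(l1_loss.numpy(), l2_loss.numpy(), xent.numpy())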

24. What are some of the parameters to consider when implementing the Word2vec algorithm in TensorFlow?

Ans: The Word2vec algorithm is used to compute the vector representations of words from an input dataset.

There are six parameters that have to be considered:

  • embedding_size: Denotes the dimension of the embedding vector
  • max_vocabulary_size: Denotes the total number of unique words in the vocabulary
  • min_occurrence: Removes all words that do not appear at least ‘n’ number of times
  • skip_window: Denotes how many words to the left and right of the target word are considered for processing
  • num_skips: Denotes the number of times you can reuse an input to generate a label
  • num_sampled: Denotes the number of negative examples to sample from the input
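A sketch of how these parameters might be set before building the model; the values below are illustrative, not recommendations:

# Word2vec hyperparameters (illustrative values)
embedding_size = 200         # dimension of the embedding vector
max_vocabulary_size = 50000  # total number of unique words kept in the vocabulary
min_occurrence = 10          # drop words that appear fewer than this many times
skip_window = 3              # words considered to the left and right of the target
num_skips = 2                # how many times an input is reused to generate a label
num_sampled = 64             # number of negative examples to sample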

25. What is the use of ArrayFlow and FeedDictFlow in TensorFlow?

Ans: ArrayFlow is used to convert array entities into tensors and store them automatically in a queue data structure.

  • data_flow.ArrayFlow()

FeedDictFlow is used to generate a stream of batch data from the input dataset. The working is based on two queues, where one is used to generate batches and the other is used to load the data and apply preprocessing methods on it.

  • data_flow.FeedDictFlow()

26. What are some of the commonly used optimizers when training a model in TensorFlow?

Ans: You can use many optimizers based on various factors, such as the learning rate, performance metric, dropout, gradient, and more.

Following are some of the popular optimizers:

  • AdaDelta
  • AdaGrad
  • Adam
  • Momentum
  • RMSprop
  • Stochastic Gradient Descent
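A minimal sketch of configuring these optimizers through the Keras API, assuming TensorFlow 2.x (the learning rates and the model are illustrative):

import tensorflow as tf
from tensorflow import keras

# Each optimizer can be configured and passed to model.compile();
# the learning rates below are illustrative
optimizers = {
    'adadelta': keras.optimizers.Adadelta(learning_rate=1.0),
    'adagrad':  keras.optimizers.Adagrad(learning_rate=0.01),
    'adam':     keras.optimizers.Adam(learning_rate=0.001),
    'momentum': keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    'rmsprop':  keras.optimizers.RMSprop(learning_rate=0.001),
    'sgd':      keras.optimizers.SGD(learning_rate=0.01),
}

model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer=optimizers['adam'], loss='mse')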

27. How is the weighted standard error computed in TensorFlow?

Ans: The weighted standard error is a standard metric that is used to compute the coefficient of determination when working with a linear regression model.

It provides an easy way to evaluate the model and can be used as shown below:

# Used along with TFLearn estimators
import tflearn
from tflearn.metrics import WeightedR2

# 'net' is assumed to be a previously built TFLearn network
weighted_r2 = WeightedR2()
net = tflearn.regression(net, metric=weighted_r2)

28. What is the simple syntax that can be used to convert a NumPy array into a tensor?

Ans: There are two ways a NumPy array can be converted into a tensor when working with Python. The first one is as follows:

  • tf.train.shuffle_batch()

And the second way is:

  • tf.convert_to_tensor(tensor1d, dtype=tf.float64)

The high-level code shown above offers a good amount of readability and ease of use.
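A minimal runnable sketch of the second approach, assuming TensorFlow 2.x:

import numpy as np
import tensorflow as tf

tensor1d = np.array([1.0, 2.0, 3.0])

# Convert the NumPy array into a TensorFlow tensor
t = tf.convert_to_tensor(tensor1d, dtype=tf.float64)
print(t)  # tf.Tensor([1. 2. 3.], shape=(3,), dtype=float64)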

29. What is model quantization in TensorFlow?

Ans: TensorFlow can greatly reduce the complexity involved in optimizing inference. Model quantization is primarily used to reduce the precision of the weight representations and also for the storage and computation of the activation functions.

Using model quantization provides users with two main advantages:

  • Support for a variety of CPU platforms
  • SIMD instruction handling capabilities
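A minimal sketch of post-training quantization using the TensorFlow Lite converter, assuming TensorFlow 2.x; the model and output path are illustrative:

import tensorflow as tf
from tensorflow import keras

# A small model to quantize (training omitted)
model = keras.Sequential([keras.layers.Dense(10, input_shape=(784,))])

# Post-training quantization with the TensorFlow Lite converter
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
tflite_model = converter.convert()

with open('/tmp/model_quantized.tflite', 'wb') as f:
    f.write(tflite_model)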

30. What are activation functions in TensorFlow?

Ans: Activation functions are functions applied to the output of a neural network layer, and that output then serves as the input to the next layer. They form a very important part of neural networks because they provide the nonlinearity that sets a neural network apart from logistic regression.
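A minimal sketch applying a few common activation functions, assuming TensorFlow 2.x eager execution:

import tensorflow as tf

x = tf.constant([-2.0, -0.5, 0.0, 0.5, 2.0])

# A few commonly used activation functions applied element-wise
print(tf.nn.relu(x).numpy())     # max(0, x)
print(tf.nn.sigmoid(x).numpy())  # 1 / (1 + exp(-x))
print(tf.nn.tanh(x).numpy())     # hyperbolic tangent
print(tf.nn.softmax(x).numpy())  # normalized exponentials summing to 1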