# Lecture 7: Introduction to TensorFlow


These are TensorFlow mathematical operations, as opposed to NumPy operations. Okay, so let's actually see how this works in code. We're going to do three things: we're going to create the weights, including initialization; we're going to create a placeholder variable for our input x; and then we're going to build our flow graph.

So how does this look in code? We import our TensorFlow package. We build a Python variable b that is a TensorFlow variable, taking initial zeros of size 100, a vector of 100 values. Our W is going to be a TensorFlow variable taking uniformly distributed values between -1 and 1, of shape 184 by 100. We create a placeholder for our input data that doesn't take any initial values; it just takes a data type, 32-bit floats, as well as a shape. Now we're in a position to actually build our flow graph: we express h as the TensorFlow ReLU of the TensorFlow matrix multiplication of x and W, and we add b.

You can see that the line where we build our h looks essentially the same as it would in NumPy, except we're calling the TensorFlow mathematical operations. And that is absolutely essential, because up to this point we are not actually manipulating any data; we're only building symbols inside our graph. No data is actually moving through our system yet. You cannot print h and see the value it expresses. First and foremost, that's because x is just a placeholder and doesn't have any real data in it yet. But even if it did, you cannot print h until we run the graph. We are just building a backbone for our model. But you might wonder now: where is the graph?
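The three steps just described can be sketched as follows. This is a sketch, not the lecture's verbatim code: it uses the TensorFlow 1.x graph-building style (written against `tf.compat.v1` so it also runs under TensorFlow 2.x), and the shapes simply follow the numbers read out in the lecture.

```python
# Graph construction in TensorFlow 1.x style, via the compat.v1 API.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # build a static graph, as in TF 1.x

# Bias: a variable initialized to a vector of 100 zeros.
b = tf.Variable(tf.zeros((100,)))

# Weights: a variable initialized uniformly in [-1, 1], shape 184 x 100.
W = tf.Variable(tf.random_uniform((184, 100), -1.0, 1.0))

# Input: a placeholder carries no data yet, only a dtype and a shape.
x = tf.placeholder(tf.float32, (None, 184))

# Build the flow graph: h = ReLU(xW + b). No computation happens here;
# h is only a symbolic node, so printing it shows a Tensor description,
# not actual values.
h = tf.nn.relu(tf.matmul(x, W) + b)
print(h)
```

Note that `print(h)` shows a symbolic Tensor of shape (None, 100), not numbers, exactly as the lecture says: no data has flowed through the graph yet.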
If you look at the slide earlier, I didn't build a separate node for the matrix multiplication, a different node for the add, and a different node for the ReLU; the ReLU is just h. We've only defined one line, but I claim that we have all of these nodes in our graph. So if you actually try to analyze what's happening in the graph (and there are not too many reasons to do this when you're programming in TensorFlow), you can call on the default graph and then call get_operations on it, and you see all of the nodes in the graph. There are a lot of things going on here. You can see in the top three lines that we have three separate nodes just to define this concept of zeros; no values are initially assigned yet to our b, but the graph is getting ready to take in those values. We have all of these other nodes just to define what the random uniform distribution is, and in the right column we see another node for Variable_1, which is probably going to be our W. At the bottom four lines we actually see the nodes as they appear in our figure: the placeholder, the matrix multiplication, the addition, and the ReLU.

So in fact, the figure we're presenting on the board is a simplification of what TensorFlow graphs look like. There are a lot of things going on behind the scenes that you don't really need to interface with as a programmer, but it is extremely important to keep in mind that this is the level of abstraction that TensorFlow is working with above the Python code; this is what is actually going to be computed in your graph. It is also interesting to see that the last node, the ReLU, points to the same object in memory as the h variable that we defined above; both of them refer to the same operation. So in the code before, what this h actually stands for is the last node in the graph that we built. So, great. We've defined the graph. Question?
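The graph inspection just described might look like the following sketch (again in `tf.compat.v1` style; the exact node names can vary between TensorFlow versions, but the Placeholder, MatMul, add, and Relu nodes at the end mirror the figure):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Rebuild the same one-line graph from before.
b = tf.Variable(tf.zeros((100,)))
W = tf.Variable(tf.random_uniform((184, 100), -1.0, 1.0))
x = tf.placeholder(tf.float32, (None, 184))
h = tf.nn.relu(tf.matmul(x, W) + b)

# List every op that this handful of lines actually created:
# several nodes defining zeros, several defining random_uniform,
# the two Variables, and finally Placeholder, MatMul, add, Relu.
for op in tf.get_default_graph().get_operations():
    print(op.name)

# The last op appended to the graph is the very operation h refers to.
assert h.op is tf.get_default_graph().get_operations()[-1]
```

The final assertion is the point the lecture makes: h is not a value but a handle to the last node of the graph we built.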
So the question was about how we're deciding what the values and the types are. This is a purely arbitrary choice; it's just part of our example. Okay. Great, so we've defined a graph, and the next question is: how do we actually run it? The way you run graphs in TensorFlow is you deploy them in something called a session. A session is a binding to a particular execution context, like a CPU or a GPU. So we're going to take the graph that we built and deploy it onto a CPU or a GPU. You might actually be interested to know that Google has developed their own integrated circuit, called a tensor processing unit, just to make tensor computation extremely fast; it's in fact orders of magnitude faster than even a GPU, and they did use it in the AlphaGo match against Lee Sedol. So the session is any hardware environment that supports the execution of all the operations in your graph. That's how you deploy a graph, great. So let's see how this is run in code: we're going to build a session object, and we're going to call run with two arguments, fetches and feeds.
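A minimal sketch of deploying and running the graph in a session, in the same `tf.compat.v1` style as above (the batch of 32 random inputs is made up purely for illustration):

```python
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# The same graph as before.
b = tf.Variable(tf.zeros((100,)))
W = tf.Variable(tf.random_uniform((184, 100), -1.0, 1.0))
x = tf.placeholder(tf.float32, (None, 184))
h = tf.nn.relu(tf.matmul(x, W) + b)

sess = tf.Session()  # bind the graph to an execution context (CPU/GPU)
sess.run(tf.global_variables_initializer())  # now b and W get real values

# run(fetches, feeds): fetch the node h, feed data in for the placeholder x.
out = sess.run(h, feed_dict={x: np.random.rand(32, 184)})
print(out.shape)  # (32, 100) -- now h has concrete values
```

Only at this `sess.run` call does data actually flow through the graph; everything before it was symbolic.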