We develop, train, and deploy TensorFlow models from R. But that doesn’t mean we don’t make use of documentation, blog posts, and examples written in Python. We look up specific functionality in the official TensorFlow API docs; we get inspiration from other people’s code.

Depending on how comfortable you are with Python, there’s a problem. For example: You’re supposed to know how *broadcasting* works. And perhaps, you’d say you’re vaguely familiar with it: So when arrays have different shapes, some elements get duplicated until their shapes match and … and isn’t R vectorized anyway?

While such a global notion may work in general, like when skimming a blog post, it’s not enough to understand, say, examples in the TensorFlow API docs. In this post, we’ll try to arrive at a more exact understanding, and check it on concrete examples.

Speaking of examples, here are two motivating ones.

## Broadcasting in action

The first uses TensorFlow’s `matmul` to multiply two tensors. Would you like to guess the result – not the numbers, but how it comes about in general? Does this even run without error – shouldn’t matrices be two-dimensional (*rank*-2 tensors, in TensorFlow speak)?

```
a <- tf$constant(keras::array_reshape(1:12, dim = c(2, 2, 3)))
a
# tf.Tensor(
# [[[ 1.  2.  3.]
#   [ 4.  5.  6.]]
#
#  [[ 7.  8.  9.]
#   [10. 11. 12.]]], shape=(2, 2, 3), dtype=float64)
b <- tf$constant(keras::array_reshape(101:106, dim = c(1, 3, 2)))
b
# tf.Tensor(
# [[[101. 102.]
#   [103. 104.]
#   [105. 106.]]], shape=(1, 3, 2), dtype=float64)
c <- tf$matmul(a, b)
```

Second, here is a “real example” from a TensorFlow Probability (TFP) GitHub issue. (Translated to R, but keeping the semantics.)

In TFP, we can have *batches* of distributions. That, per se, is no surprise. But look at this:

```
library(tfprobability)
d <- tfd_normal(loc = c(0, 1), scale = matrix(1.5:4.5, ncol = 2, byrow = TRUE))
d
# tfp.distributions.Normal("Normal", batch_shape=[2, 2], event_shape=[], dtype=float64)
```

We create a batch of four normal distributions: each with a different *scale* (1.5, 2.5, 3.5, 4.5). But wait: there are only two *location* parameters given. So what are their *scales*, respectively?

Luckily, TFP developers Brian Patton and Chris Suter explained how it works: TFP actually does broadcasting – with distributions – just like with tensors!

We’ll get back to both examples at the end of this post. Our main focus will be to explain broadcasting as done in NumPy, as NumPy-style broadcasting is what numerous other frameworks have adopted (e.g., TensorFlow).

Before that though, let’s quickly review a few basics about NumPy arrays: how to index or *slice* them (indexing generally referring to single-element extraction, while slicing would yield – well – slices containing several elements); how to parse their shapes; some terminology and related background.

Though not complicated per se, these are the kinds of things that can be confusing to infrequent Python users; yet they’re often a prerequisite to successfully making use of Python documentation.

Stated upfront, we’ll really restrict ourselves to the basics here; for example, we won’t touch advanced indexing, which – just like lots more – can be looked up in detail in the NumPy documentation.

## A few facts about NumPy

### Basic slicing

For simplicity, we’ll use the terms indexing and slicing roughly synonymously from now on. The basic device here is a *slice*, namely, a `start:stop` structure indicating, for a single dimension, which range of elements to include in the selection.

In contrast to R, Python indexing is zero-based, and the end index is exclusive:
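For instance (a minimal sketch; we run the Python via `reticulate`, which the R TensorFlow packages build on anyway):

```
library(reticulate)

py_run_string("
import numpy as np
x = np.arange(5)   # array([0, 1, 2, 3, 4])
print(x[0])        # 0     -- zero-based: the first element has index 0
print(x[1:3])      # [1 2] -- end index exclusive: elements 1 and 2, but not 3
")
```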

And second, when we add tensors with shapes `(3, 3)` and `(3,)`, the 1-d tensor should get added to every row (not every column):

```
a <- tf$constant(matrix(1:9, ncol = 3, byrow = TRUE), dtype = tf$float32)
a
# tf.Tensor(
# [[1. 2. 3.]
#  [4. 5. 6.]
#  [7. 8. 9.]], shape=(3, 3), dtype=float32)
b <- tf$constant(c(100, 200, 300))
b
# tf.Tensor([100. 200. 300.], shape=(3,), dtype=float32)
a + b
# tf.Tensor(
# [[101. 202. 303.]
#  [104. 205. 306.]
#  [107. 208. 309.]], shape=(3, 3), dtype=float32)
```
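Conversely – a quick sketch going beyond the example above – if we wanted `b` added to every column instead, we could give it an explicit trailing dimension of size `1`, making its shape `(3, 1)`:

```
# reshape b to shape (3, 1); broadcasting then duplicates it across columns
b_col <- tf$reshape(b, c(3L, 1L))
a + b_col
# tf.Tensor(
# [[101. 102. 103.]
#  [204. 205. 206.]
#  [307. 308. 309.]], shape=(3, 3), dtype=float32)
```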

Now back to the initial `matmul` example.

## Back to the puzzles

The documentation for `matmul` says:

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.

So here (see code just below), the inner two dimensions look good – `(2, 3)` and `(3, 2)` – while the single (one and only, in this case) batch dimension shows mismatching values `2` and `1`, respectively.

A case for broadcasting thus: Both “batches” of `a` get matrix-multiplied with `b`.

```
a <- tf$constant(keras::array_reshape(1:12, dim = c(2, 2, 3)))
a
# tf.Tensor(
# [[[ 1.  2.  3.]
#   [ 4.  5.  6.]]
#
#  [[ 7.  8.  9.]
#   [10. 11. 12.]]], shape=(2, 2, 3), dtype=float64)
b <- tf$constant(keras::array_reshape(101:106, dim = c(1, 3, 2)))
b
# tf.Tensor(
# [[[101. 102.]
#   [103. 104.]
#   [105. 106.]]], shape=(1, 3, 2), dtype=float64)
c <- tf$matmul(a, b)
c
# tf.Tensor(
# [[[ 622.  628.]
#   [1549. 1564.]]
#
#  [[2476. 2500.]
#   [3403. 3436.]]], shape=(2, 2, 2), dtype=float64)
```

Let’s quickly check this really is what happens, by multiplying both batches separately:

```
tf$matmul(a[1, , ], b)
# tf.Tensor(
# [[[ 622.  628.]
#   [1549. 1564.]]], shape=(1, 2, 2), dtype=float64)
tf$matmul(a[2, , ], b)
# tf.Tensor(
# [[[2476. 2500.]
#   [3403. 3436.]]], shape=(1, 2, 2), dtype=float64)
```
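By the way, we can also make the broadcast explicit ourselves – a quick sketch using `tf$broadcast_to`, which tiles `b` along its batch dimension and then yields the same result:

```
# explicitly broadcast b from shape (1, 3, 2) to shape (2, 3, 2)
b2 <- tf$broadcast_to(b, c(2L, 3L, 2L))
tf$matmul(a, b2)
# same as tf$matmul(a, b):
# tf.Tensor(
# [[[ 622.  628.]
#   [1549. 1564.]]
#
#  [[2476. 2500.]
#   [3403. 3436.]]], shape=(2, 2, 2), dtype=float64)
```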

Is it too weird to be wondering if broadcasting would also happen for matrix dimensions? E.g., could we try `matmul`ing tensors of shapes `(2, 4, 1)` and `(2, 3, 1)`, where the `4 x 1` matrix would be broadcast to `4 x 3`? – A quick test shows that no.
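Here is a sketch of such a test (the exact error message will vary with the TensorFlow version):

```
x <- tf$constant(keras::array_reshape(1:8, dim = c(2, 4, 1)))
y <- tf$constant(keras::array_reshape(1:6, dim = c(2, 3, 1)))
# the inner dimensions, (4, 1) and (3, 1), are not valid matmul partners,
# and matmul does not broadcast them into compatibility
tryCatch(
  tf$matmul(x, y),
  error = function(e) cat("matmul failed, as expected\n")
)
```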

To see how, when dealing with TensorFlow operations, it really pays off to overcome one’s initial reluctance and actually consult the documentation, let’s try another one.

In the documentation for `matvec`, we are told:

Multiplies matrix a by vector b, producing a * b.

The matrix a must, following any transpositions, be a tensor of rank >= 2, with shape(a)[-1] == shape(b)[-1], and shape(a)[:-2] able to broadcast with shape(b)[:-1].

In our understanding, given input tensors of shapes `(2, 2, 3)` and `(2, 3)`, `matvec` should perform two matrix-vector multiplications: once for each batch, as indexed by each input’s leftmost dimension. Let’s check this – so far, there is no broadcasting involved:

```
# two matrices
a <- tf$constant(keras::array_reshape(1:12, dim = c(2, 2, 3)))
a
# tf.Tensor(
# [[[ 1.  2.  3.]
#   [ 4.  5.  6.]]
#
#  [[ 7.  8.  9.]
#   [10. 11. 12.]]], shape=(2, 2, 3), dtype=float64)
b <- tf$constant(keras::array_reshape(101:106, dim = c(2, 3)))
b
# tf.Tensor(
# [[101. 102. 103.]
#  [104. 105. 106.]], shape=(2, 3), dtype=float64)
c <- tf$linalg$matvec(a, b)
c
# tf.Tensor(
# [[ 614. 1532.]
#  [2522. 3467.]], shape=(2, 2), dtype=float64)
```

Double-checking, we manually multiply the corresponding matrices and vectors, and get:

```
tf$linalg$matvec(a[1, , ], b[1, ])
# tf.Tensor([ 614. 1532.], shape=(2,), dtype=float64)
tf$linalg$matvec(a[2, , ], b[2, ])
# tf.Tensor([2522. 3467.], shape=(2,), dtype=float64)
```

The same. Now, will we see broadcasting if `b` has just a single batch?

```
b <- tf$constant(keras::array_reshape(101:103, dim = c(1, 3)))
b
# tf.Tensor([[101. 102. 103.]], shape=(1, 3), dtype=float64)
c <- tf$linalg$matvec(a, b)
c
# tf.Tensor(
# [[ 614. 1532.]
#  [2450. 3368.]], shape=(2, 2), dtype=float64)
```

Multiplying every batch of `a` with `b`, for comparison:

```
tf$linalg$matvec(a[1, , ], b)
# tf.Tensor([[ 614. 1532.]], shape=(1, 2), dtype=float64)
tf$linalg$matvec(a[2, , ], b)
# tf.Tensor([[2450. 3368.]], shape=(1, 2), dtype=float64)
```

It worked!

Now, on to the other motivating example, using *tfprobability*.

### Broadcasting everywhere

Here again is the setup:

```
library(tfprobability)
d <- tfd_normal(loc = c(0, 1), scale = matrix(1.5:4.5, ncol = 2, byrow = TRUE))
d
# tfp.distributions.Normal("Normal", batch_shape=[2, 2], event_shape=[], dtype=float64)
```

What’s going on? Let’s inspect *location* and *scale* separately:

```
d$loc
# tf.Tensor([0. 1.], shape=(2,), dtype=float64)
d$scale
# tf.Tensor(
# [[1.5 2.5]
#  [3.5 4.5]], shape=(2, 2), dtype=float64)
```

Just focusing on these tensors and their shapes, and having been told that there’s broadcasting going on, we can reason like this: Aligning both shapes on the right and extending `loc`’s shape by `1` (on the left), we have `(1, 2)`, which may be broadcast with `(2, 2)` – in matrix-speak, `loc` is treated as a row and duplicated.
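To mimic this outside of TFP, with plain tensors (a quick sketch):

```
loc <- tf$constant(c(0, 1))
# shape (2,) is implicitly extended to (1, 2), then broadcast to (2, 2)
tf$broadcast_to(loc, c(2L, 2L))
# tf.Tensor(
# [[0. 1.]
#  [0. 1.]], shape=(2, 2), dtype=float64)
```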

This means: We have two distributions with mean 0 (one of scale 1.5, the other of scale 3.5), and also two with mean 1 (corresponding scales being 2.5 and 4.5).

Here’s a more direct way to see this:

```
d$mean()
# tf.Tensor(
# [[0. 1.]
#  [0. 1.]], shape=(2, 2), dtype=float64)
d$stddev()
# tf.Tensor(
# [[1.5 2.5]
#  [3.5 4.5]], shape=(2, 2), dtype=float64)
```
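As a final plausibility check (a quick sketch; sampled values will of course vary from run to run), a single draw from `d` delivers one value per batch member:

```
s <- tfd_sample(d)
s$shape
# TensorShape([2, 2])
```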

Puzzle solved!

Summing up, broadcasting is simple “in theory” (its rules are), but may need some practice to get right. Especially in conjunction with the fact that functions / operators have their own views on which parts of their inputs should broadcast, and which shouldn’t. Really, there is no way around looking up the actual behaviors in the documentation.

Hopefully though, you’ve found this post to be a good start into the topic. Maybe, like the author, you now feel like you might see broadcasting happening anywhere in the world. Thanks for reading!