
Leaving dimension unaffected with numpy tensordot

Ask Time:2017-07-21T09:56:49         Author:orange


I've been using np.tensordot in the past without any problems, but in my current example, I struggle to understand the result.

For np.tensordot(d * y, r, axes=((1, 2, 3), (2, 3, 4))).shape, I would expect a shape of (6, 5), but instead I get (6, 6, 5). When I run tensordot 6 times along axis 0, however, I do get the expected result, but I'd rather have tensordot do this for me in one call. What's wrong here?

>>> import numpy as np
>>> d = np.random.rand(6, 7, 1, 2)
>>> y = np.random.rand(6, 7, 1, 2)
>>> r = np.random.rand(6, 5, 7, 1, 2) > 0.5
>>> 
>>> np.tensordot(d * y, r, axes=((1, 2, 3), (2, 3, 4))).shape
(6, 6, 5)
>>> np.tensordot((d * y)[0], r[0], axes=((0, 1, 2), (1, 2, 3))).shape
(5,)
>>> np.tensordot((d * y)[1], r[1], axes=((0, 1, 2), (1, 2, 3))).shape
(5,)
>>> np.tensordot((d * y)[2], r[2], axes=((0, 1, 2), (1, 2, 3))).shape
(5,)
...
>>> np.tensordot((d * y)[5], r[5], axes=((0, 1, 2), (1, 2, 3))).shape
(5,)

Author: orange. Reproduced under the CC 4.0 BY-SA copyright license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/45227793/leaving-dimension-unaffected-with-numpy-tensordot
hpaulj :

Consider a simpler case:

In [709]: d = np.ones((6, 2));
In [710]: np.tensordot(d, d, axes=(1, 1)).shape
Out[710]: (6, 6)

This is equivalent to:

In [712]: np.einsum('ij,kj->ik', d, d).shape
Out[712]: (6, 6)

This isn't ij,ij->i. It's an outer product on the unlisted axes, not an element-by-element one.

You have (6, 7, 1, 2) and (6, 5, 7, 1, 2), and want to sum on (7, 1, 2). tensordot is doing an outer product on the (6,) and (6, 5).

np.einsum('i...,ij...->ij', d, r) would do, I think, what you want.

Under the covers, tensordot reshapes and swaps axes so that the problem becomes a 2d np.dot call. Then it reshapes and swaps back as needed.

Correction: I can't use an ellipsis for the 'dotted' dimensions, so spell the axes out:

In [726]: np.einsum('aijk,abijk->ab', d, r).shape
Out[726]: (6, 5)

and a broadcasting-plus-sum method:

In [729]: (d[:, None, ...] * r).sum(axis=(2, 3, 4)).shape
Out[729]: (6, 5)

timings:

In [734]: timeit [np.tensordot(d[i], r[i], axes=((0, 1, 2), (1, 2, 3))) for i in
     ...: range(6)]
145 µs ± 514 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
In [735]: timeit np.einsum('aijk,abijk->ab', d, r)
7.22 µs ± 34.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [736]: timeit (d[:, None, ...] * r).sum(axis=(2, 3, 4))
16.6 µs ± 84.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

Another solution, using the @ (matmul) operator:

In [747]: timeit np.squeeze(d.reshape(6, 1, 14) @ r.reshape(6, 5, 14).transpose(0, 2, 1))
11.4 µs ± 28.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
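To tie the answer back to the question, here is a minimal sketch (using the questioner's shapes, with random data) that checks the three single-call formulations against the per-row tensordot loop. The variable names `dy`, `loop`, `via_einsum`, `via_sum`, and `via_matmul` are illustrative, not from the original posts:

```python
import numpy as np

# Shapes from the question: a shared first axis of 6, contracted dims (7, 1, 2).
d = np.random.rand(6, 7, 1, 2)
y = np.random.rand(6, 7, 1, 2)
r = np.random.rand(6, 5, 7, 1, 2) > 0.5

dy = d * y

# The questioner's workaround: one tensordot per slice along axis 0.
loop = np.stack([np.tensordot(dy[i], r[i], axes=((0, 1, 2), (1, 2, 3)))
                 for i in range(6)])

# einsum keeps the shared axis 'a' aligned instead of taking an outer product.
via_einsum = np.einsum('aijk,abijk->ab', dy, r)

# Broadcasting the (6, 5) pair, then summing out the contracted axes.
via_sum = (dy[:, None, ...] * r).sum(axis=(2, 3, 4))

# matmul on the flattened contraction axes (7 * 1 * 2 = 14), batched over axis 0.
via_matmul = np.squeeze(
    dy.reshape(6, 1, 14) @ r.reshape(6, 5, 14).transpose(0, 2, 1), axis=1)

print(loop.shape)                      # (6, 5)
print(np.allclose(loop, via_einsum))   # True
print(np.allclose(loop, via_sum))      # True
print(np.allclose(loop, via_matmul))   # True
```

All three agree because they perform the same contraction over (7, 1, 2) while treating axis 0 as a batch axis, which tensordot alone cannot express.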
2017-07-21T02:12:53