Theano

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. It can use GPUs and perform efficient symbolic differentiation.
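
The core idea is that you build a symbolic expression graph, which Theano can then optimize, differentiate, and compile. As a rough illustration of that idea only (this is plain Python, not Theano's actual API), a minimal expression graph with evaluation and symbolic differentiation might look like:

```python
# Toy sketch of a symbolic expression graph with evaluation and
# symbolic differentiation -- illustrative only, NOT Theano's real API.

class Var:
    def __init__(self, name):
        self.name = name
    def eval(self, env):
        return env[self.name]
    def grad(self, wrt):
        # d(x)/d(x) = 1, d(y)/d(x) = 0
        return Const(1.0) if self is wrt else Const(0.0)

class Const:
    def __init__(self, value):
        self.value = value
    def eval(self, env):
        return self.value
    def grad(self, wrt):
        return Const(0.0)

class Add:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, env):
        return self.a.eval(env) + self.b.eval(env)
    def grad(self, wrt):
        return Add(self.a.grad(wrt), self.b.grad(wrt))

class Mul:
    def __init__(self, a, b):
        self.a, self.b = a, b
    def eval(self, env):
        return self.a.eval(env) * self.b.eval(env)
    def grad(self, wrt):
        # product rule: (a*b)' = a'*b + a*b'
        return Add(Mul(self.a.grad(wrt), self.b),
                   Mul(self.a, self.b.grad(wrt)))

x = Var("x")
expr = Add(Mul(x, x), Const(3.0))   # x**2 + 3
dexpr = expr.grad(x)                # symbolic derivative: 2*x

print(expr.eval({"x": 2.0}))        # 7.0
print(dexpr.eval({"x": 2.0}))       # 4.0
```

Theano does the same in spirit, but on multi-dimensional arrays, with graph optimizations and compilation to fast C or GPU code.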

Statistics on Theano

  • Number of watchers on GitHub: 8,004
  • Number of open issues: 632
  • Average time to close an issue: 4 days
  • Main language: Python
  • Average time to merge a PR: 3 days
  • Open pull requests: 308+
  • Closed pull requests: 74+
  • Last commit: over 1 year ago
  • Repo created: about 8 years ago
  • Repo last updated: over 1 year ago
  • Size: 64.8 MB
  • Homepage: http://www.deeple...
  • Organization / Author: theano
  • Contributors: 196
============================================================================================================

`MILA will stop developing Theano <https://groups.google.com/d/msg/theano-users/7Poq8BZutbY/rNCIfvAEAwAJ>`_.

To install the package, see this page: http://deeplearning.net/software/theano/install.html

For the documentation, see the project website: http://deeplearning.net/software/theano/

Related Projects: https://github.com/Theano/Theano/wiki/Related-projects

It is recommended that you look at the documentation on the website, as it will be more current than the documentation included with the package.

To build the documentation yourself, you will need Sphinx. Run the following command:

::

python ./doc/scripts/docgen.py

Documentation is built into html/

The PDF of the documentation can be found at html/theano.pdf

================

DIRECTORY LAYOUT

Theano (current directory) is the distribution directory.

  • Theano/theano contains the package, which has several submodules:

    • gof + compile are the core
    • scalar depends upon core
    • tensor depends upon scalar
    • sparse depends upon tensor
    • sandbox can depend on everything else
  • Theano/examples are copies of the examples found on the wiki

  • Theano/benchmark and Theano/examples are in the distribution, but not in the Python package

  • Theano/bin contains executable scripts that are copied to the bin folder when the Python package is installed

  • Tests are distributed as part of the package, i.e. they fall under the appropriate submodules

  • Theano/doc contains files and scripts used to generate the documentation

  • Theano/html is where the documentation will be generated
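
The submodule dependencies listed above form a small DAG (gof + compile at the bottom, sandbox at the top). Purely as an illustration (this code is not part of Theano), a topological sort over those declared dependencies yields a valid bottom-up build/import order:

```python
# Hypothetical sketch: topologically sort the submodule dependency
# graph described above ("core" stands for gof + compile).

deps = {
    "core":    [],                                       # gof + compile
    "scalar":  ["core"],                                 # scalar depends upon core
    "tensor":  ["scalar"],                               # tensor depends upon scalar
    "sparse":  ["tensor"],                               # sparse depends upon tensor
    "sandbox": ["core", "scalar", "tensor", "sparse"],   # can depend on everything
}

def topo_order(deps):
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in deps[node]:
            visit(dep)
        order.append(node)
    for node in deps:
        visit(node)
    return order

print(topo_order(deps))
# ['core', 'scalar', 'tensor', 'sparse', 'sandbox']
```

In other words, each layer only builds on the layers beneath it, which is why the core (gof + compile) has no dependencies and sandbox sits on top.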

Theano open issues
  • almost 3 years doc prepare_node()
  • almost 3 years BadOptimization when using gpuarray backend in DebugMode
  • almost 3 years RuntimeError on my Ubuntu16.04 with CUDA.8.0 and cuDNN5.1
  • almost 3 years Retry theano import or check for free GPU
  • almost 3 years Anaconda/Windows documentation is out of date.
  • almost 3 years New lift optimization pass
  • almost 3 years theano.tensor.extra_ops.cumprod gradient handle zero incorrectly
  • almost 3 years Theano crashes when Variable 'owner' attribute is not defined
  • almost 3 years Add support of bool indexing in Theano
  • almost 3 years Port cusolver GpuSolve to new back-end
  • almost 3 years Creating abstract ops for batch_normalization_train and batch_normalization_test
  • almost 3 years Better document config.gpuarray.preallocate
  • almost 3 years Problem with spaces in path (Theano 0.9.0.dev4 / Windows 10 Machine, 64-bit)
  • almost 3 years CnMem inconsistent behavior causes ALLOC_FAILED
  • almost 3 years Print the GPU PCI device
  • almost 3 years gpuarray.test_dnn failures in float16
  • almost 3 years Deal with degenerate Booleans (and tests)
  • almost 3 years pipelining scan
  • almost 3 years test_conv3d_bwd failing
  • almost 3 years local_opt_alloc fails to cast its replacement to the original type
  • almost 3 years GPU available memory reported incorrectly under Windows 7 64 bit OS
  • almost 3 years apply_node.fgraph and var.fgraph should be deleted when they are removed from FunctionGraph
  • almost 3 years Adding Copy to SparseVariable
  • almost 3 years Compiling failure with latest developer version
  • almost 3 years stable norm computation
  • almost 3 years Optimization failure when adding result of set_subtensor on GPU
  • almost 3 years The SoftmaxGrad opt doesn't check the Elemwise type.
  • almost 3 years theano.tensor.take does not allow axis=-1 on GPU with optimization
  • almost 3 years Make signal.conv2d build an AbstractConv2d Op
  • almost 3 years Document bidirectional RNNs from cuDNN
Theano open pull requests
  • fix potential nan
  • cuDNN header / library paths fix
  • Fix the cnmem print percent message
  • better stack trace handling.
  • Don't call eval
  • Add doc for opt.py
  • [WIP] Convolution shape tutorial
  • GpuBatchedDot: streams implementation (WIP)
  • Also lift Dot22Scalar.
  • fixed rng_mrg int32 overflow, just throw out the error when it overflows
  • test using theano tile added
  • Add slow implementation for Abstract Conv2D Ops #3598
  • tip for mixed dtypes
  • using _props_dict() in gnu optimisation , issue #3745
  • [WIP] Compiled c instances reduction by adding the destroy map as a Param to Elemwise Op
  • Kmap
  • Move the AbstractConv tests with the implementation
  • Add some developper documentation for Scan
  • Assert to never reintroduce Apply node in fgraph
  • GPU triangular solve op
  • GPU Cholesky decomposition op
  • extension of #3983
  • Ignore division by zero in expmgrad when numerator is zero.
  • Changes to GpuSolve op to improve compatibility with slingalg Solve op
  • keep stack trace in optimizations of nnet folder
  • Added required __future__ imports
  • h_softmax: removed three redundant parameters
  • gpu dnn pool takes tensor variables
  • LogAbsDet
  • Bilinear interpolation
  • Fix typo and clarify CNMeM config documentation
  • Fix typo and clarify cuDNN config documentation
  • flake8 misc/tests/*.py
  • Check RST files in '/theano/doc/' to follow numpy's docstring
  • Catch scipy.sigtools._convolve2d complex warning
  • add docs and deprecation warning.
  • Fix nan handling in output buffer for CGemv
  • [OPT FAILURE] Fix gh-4131
  • Gradient implementation for Solve op
  • Opt
  • Make the python code for Gemv also check for the need to initialize the output with beta=0
  • Symbolic Cholesky gradient implementation
  • Improve code misleading tiny issues for Loop of Tutorial.
  • Test files have been modified in order to respect the flake8 style.
  • [ENH] faster opt by changing call to extract_constant and get_scalar_constant_value
  • flake8 compile/tests/*.py
  • Added confusion matrix op, issue #3637
  • GpuAdvancedSubtensor created
  • minor edits to docstrings; all files pass flake8
  • Added cuDNN v5 support & Added optional dependencies to setup.py
  • fix issue 4303 : reshaping part
  • Add broadcastable dimensions in opt where needed
  • Check RST files in '/theano/doc/' to follow numpy's docstring + Fix elemwise
  • Faster Cycle detection
  • fix gh-4328 This was done for python3, but not python2.
  • Moved theano.sandbox.softsign to tensor/nnet/nnet.py. Added test. #4314
  • Use the new GpuElemwise from libgpuarray
  • Prevents subprocess from importing Theano
  • [WIP] Helper function that check stack traces
  • refactored GpuJoin c_code to not hit nested block limit of MSVC
  • Ccw 4055
  • Partial function evaluation
  • [WIP] Scan with checkpoint
  • sumsqr2dot added to the opt.py
  • Better error message
  • Added support for keepdims in norm function
  • flake8 sandbox/cuda/*.py
  • Implementation of Batch Normalization from cuDNN v.4
  • added function to look for working march
  • flake8 tensor/nnet/test*.py
  • Negative indices for `addbroadcast` and `unbroadcast`
  • deterministic flag
  • [WIP] Documentation Refractor
  • add print_test_value to debug tutorial
  • Added the polygamma function, implemented digamma gradient
  • Scan reintroduced
  • Scan reintroduced benchmark
  • GpuSolve based on CUSOLVER instead of CULA
  • Move new GPU backend out of sandbox
  • keep stack trace: fix optimizations or tests (tensor/tests/test_opt.py)
  • Compatibility between both GPU back-end and doc.
  • CCW 4283
  • gpu diagonal implementation
  • Ccw3398
  • Small stuff
  • Implement Rop for Pool(mode='max')
  • cudnn slowdown on new back-end
  • Convert NanGuardMode to use the VM linker instead of WrapLinker.
  • GpuAdvancedSubtensor
  • reshape to 1 is replace with dimshuffle
  • Useless alloc
  • Keep stack trace: finish local opt in theano/tensor/opt.py
  • [WIP] Use check_stack_trace helper function in tensor/nnet/tests/
  • #2801:subtensor-incsubtensor
  • Add dev doc switcher
  • Update doc with instructions for using new gpu backend
  • Don't rebuild inplace add kernels all the time for GpuIncSubtensor.
  • Added Global Opt to transfer Graph to GPU
  • Make BN graph smaller
  • Port FFTs to gpuarray backend
  • Add tensorsolve to theano.tensor.nlinalg. Test.
  • Opt related changes.
  • ones_like and zeros_like dtype parameter is documented
  • Update the versions tested on travis
  • Ccw 4483 indent fix
  • Single stream flag
  • fix numpy 1.11 deprecationwarnings
  • useless dimshuffle in reshape is removed
  • Advanced replace subtensor with dimshuffle, resolves issue 4562
  • Change documentation theme to readthedocs and add source links
  • Ccw4057
  • corr_gemm optimization to improve CNN performance
  • Implementation of 2D dilated convolution/correlation.
  • Add some documentation on how to write gpu ops.
  • cuDNN v5 Batch Normalization
  • New graph2gpu
  • Fix inc/set_subtensor when indexing with one non-vector
  • Clean up
  • WIP: partial evaluation in CVM
  • Merged previous work implementing stack trace copy over and tests for…
  • keep stack trace for FusionOptimizer
  • [WIP] add local_lift_reshape optimization, closes #4641, closes #4451
  • yield test in test_abstract_conv for tensor.nnet, sandbox.cuda and gpuarray
  • add new GPU backend information to GPU data convert tutorial
  • sample multinomial without replacement - GPU
  • Reduce C code generated when using GpuDnnConvDesc
  • WIP: Theano variable arguments for pooling
  • Add gpu_contiguous for batch normalization.
  • GpuArray BatchNorm
  • Allow indexing using Ellipsis as in numpy.
  • [WIP] Reduce C code generated when using GpuDnnConvDesc
  • Gpuadvsub
  • WIP: Theano variable arguments for pooling 2
  • Raising error for Indexing subtensor with a Boolean mask
  • Adding __props__ to all the Ops that has __init__
  • Fix a test failure in GpuCumsum class
  • Log sum exp optimization for numerical stability
  • Self caching
  • GetItem2d.grad implemented, fix for #3243
  • Improve tensor var
  • Move shared variables of graph to specific device
  • Added tests for conv2d_grad_* methods
  • Test jenkins org
  • Remove ShapeOptimizer from FAST_COMPILE
  • Uses corrected two pass algorithm in theano.tensor.var
  • Move sandbox.cuda test to 2nd jenkins job
  • Full 3D convolution for conv3d2d
  • Fix typos and other doc/comment issues, add .idea to .gitignore, and …
  • Confusion matrix
  • [WIP] Raising error for Indexing subtensor with a Boolean mask
  • Use shift modulo axis size to handle large shifts in theano.tensor.roll
  • Resolve conflicts Rop implementation for Pool(mode='max')
  • Fix NaNGuardMode to permit None variables.
  • Added the broadcastable flags to all mrg functions.
  • ccw4601
  • [WIP] Documentation Refactor 2
  • Add support for spaces in paths
  • Fix batchnorm error in buildbot.
  • theano.tensor.signal.Pool with 3D support
  • Use pip3 to install for python3 on ubuntu 14.04
  • N2 fast destroy
  • Cgt opt
  • Fix abstractconv_grad dilation and enable border_mode tests
  • Adding an AbstractConv3d interface
  • Use randint() instead of random_integers() in the tests.
  • add ddof to var and std, apply Fred-fix to opt
  • Half 3D convolution for conv3d2d
  • Remove nose-parameterized dependency from theano.sparse
  • Improve the example for strict=True in the scan() doc.
  • Fix deprecation
  • Fix d3viz multi-level/inheritance tags
  • Break opt
  • [WIP] PR for Issue #4647
  • Don't move complex to the GPU in the new back-end.
  • Add a wrapper function for kernels to simplify calling.
  • Drop old version
  • GpuSolve is now based on CUSOLVER instead of CULA
  • Cudnn RNN bindings.
  • fix conv2d_grad_wrt_weights bug
  • Frozendicts
  • fix softmaxgrad dnn opt error fixes #5056
  • GpuCorrMM and GpuCorr3dMM in new backend
  • Icc support
  • [WIP] Make the MergeOptimizer execution time independent of the number of clients
  • GpuAdvancedIncSubtensor1 supports mixed dtypes.
  • Remove ProfileMode
  • speed up io_toposort when there is no ordering.
  • Convert Github readme.txt to .rst
  • [WIP] Scan with Checkpoints (part 2)
  • Function.__call__(): correct indexing variable for group of args with…
  • GpuDnnConvDesc props to inputs: border_mode, subsample
  • Updating documentation on shared variables
  • Removing _op_use_c_code attribute
  • GpuDnnBatchNorm with 5d inputs
  • Pool 2d rename
  • remove from var.py
  • Deprecate ds, st, padding parameters in pooling
  • make signal.conv2d to build an abstractConv2d
  • Add profiling of which node make_thunk take times.
  • warn when profiling.ignore_first_call isn't used.
  • Speed up GpuCorrMM in the new back-end
  • Avoid linear search for removed nodes
  • By default, do validation during elemwise_inplace_optimizer approx 10…
  • Doc update about cudnn and cudnn RNN.
  • use floatX in gpuarray dnn tests
  • Add bool dtype in scalar and tensor.
  • Add a method to properly convert the graph of Composite to float32.
  • Issue 5008 fixed
  • AWS Marketplace AMI install doc section
  • AWS Marketplace AMI install doc section
  • Gpuarray average and max pooling
  • Abstract Ops for batch normalization
  • ROI Pooling Layer Op in CPU and GPU
  • Add grad_scale op
  • uint16 added into _good_broadcast_binary_normal
  • This is my proposal for GpuMaxAndArgmax (issue #1399).
  • GpuDownsampleFactorMaxGradGrad3d in cuda (+bug?)
  • Cuda pooling with strides
  • Dimshuffle{0,2}(Subtensor[i:j, :, k:l]) => Subtensor[i:j, 0, k:l] #4647
  • Select the dnn convolution algorithm using actually available memory.
  • GPU gemv->dot speedup for new backend
  • Jenkins release scripts
  • Cleanup numpy min version
  • Improve MissingInput message
  • Print profile stats
  • Close file as reading file done
  • Conda fixes
  • Remove isnan from the graph for discrete dtype.
  • temporary commit
  • Added optimization for product of exponentials with same base
  • Minor inconsistency in AbstractConv_gradInput implementations
  • Gpu argmax
  • [WIP] Upgraded OpFromGraph with inline support and gradient override
  • Numpy imports
  • Fix theano scripts on windows
  • This fixes a bug in cusolver solve op when m > k
  • as per #5409, fixing a typo
  • numpy 1.12.0rc2 compatibility fix related to variable typecasting
  • Switch gs and ls to follow libgpuarray.
  • Reduce C code generated when using GpuDnnConvDesc
  • Implement conv2d_transpose convenience function
  • Spatial pooling
  • softmax opti for upstream
  • matmul() operation
  • Backport of support for pydot-ng in 0.8.x
  • Broadcasting tutorial
  • [WIP] numpy style matmul
  • Pooling rop
  • Adding Copy to SparseVariable
  • Change hashlib.md5 to hashlib.sha256 for FIPS enabled systems
  • add magma to buildbots
  • Magma gpu QR/Cholesky/Eigh
  • Tests for the softmax modifications
  • Doc tradeoff comp run
  • crash fix and opt warning fix(make opt apply)
  • [BUG, CRASH] Fixes in DebugMode for GPU
  • WIP: Allow missing inputs when they aren't needed after optimization.
  • R_op for ZeroGrad + tests
  • Implementing `GpuAdvancedIncSubtensor`
  • Use ParamsType for other ops
  • Split elemwise addmul
  • Add covariance matrix function theano.tensor.cov
  • Faster topo
  • [WIP] OpFromGraph on GPU when not inline
  • Correct gradient of scan until.
  • Add MKL ndarray and MKL library check for the new MKL engine
  • Add support for atomic{Exch,Add} on long longs.
  • Index overflow fixes
  • Softmax improvement
  • WIP: gpuarray: keep stack trace
  • L_op for OpFromGraph
  • Max pool gradient fix
  • Mixed stuff
  • Fix arguments order in register_specify_shape_c_code of tensor/type.py
  • Add flag and implementation of build_infer_shape.
  • jenkinsfile for PR pipeline
  • Function that check the axes
  • Merge supervisor
  • Grouped Convolution
  • Wrap Op params for many gpuarray DNN Ops and add cuDNN v6 integration.
  • added polygamma
  • TopKOp implementation
  • Make version as dev version and better error.
  • [WIP] Baidu CTC wrapper
  • Added mode 'half' to Images2Neibs. Tests pass.
  • dockerfile for jenkins buildbot (work in progress)
  • Wrap Op params for theano.sandbox.rng_mrg.mrg_uniform
  • Add a Python script to help run more exhaustive cuDNN algorithms tests
  • Avoid AttributeError in `pp`.
  • don't move op that don't support float16 on the GPU
  • Fix broadcasting in sparse dot
  • Supports the NumPy pad function
  • Add Instance softmax that match cudnn instance mode
  • Fix PushOutputScan when using trucate_gradient.
  • Zoom from scipy.ndimage.interpolation
  • Rop_via_Lop Implementation
  • Gpuarray sort and argsort Op
  • Create axis as Param Type for Softmax considering the case where axis is an arbitrary scalar
  • CPU Spatial Transformer
  • [WIP] Compute the determinant of a matrix on the GPU
  • Create axis as Param Type for Softmax considering the case where axis=-1
  • Execution order
  • Gumbel softmax
  • [Intel MKL] implement high performance convolution OP
  • Extend DiffOp so it allows to take gradients of multidimensional inputs
  • use twice Lop to get Rop by default
  • Implemented Rop for Prod and ZeroGrad and added testing for generic Rop
  • Make Theano work under pypy (WIP)
Theano questions on Stack Overflow
  • Theano - unexpected output from simple "RNN"
  • What does negative log likelihood of logistic regression in theano look like?
  • How to write update in theano function
  • How to map one matrix value to another in theano function
  • Theano: How to take a "matrix outer product" where the elements are matrices
  • Theano dmatrix contains newaxis raise dimension mismatch
  • Linear Regression Lasagne / Theano
  • Eclipse with pydev cant import theano
  • theano: conv3d2d error while doing 3d convolution
  • Python missing modules with Theano
  • Theano - Keras - No Module named `pool`
  • Theano: cublasSgemm failed (14) an internal operation failed
  • Theano's pkl_utils Dump Function not available in Theano 0.7?
  • ImportError: No module named theano
  • Theano takes a few minutes to start a script running on GPU
  • Using theano with single examples out of batch
  • In theano, making the matrix of slices from a vector
  • Combining scalars and vectors in Theano for computing Hessian
  • Installing Theano with GPU on Windows 8.1 64-bit with Visual Studio 2013
  • CPU (with direct Theano binding to blas) is slower?
  • Merging and training Theano autoencoders
  • Theano: How to implement the distance between desired output (1d) and label as cost function
  • Any way to combine Theano tensor vectors similar to itertools.chain?
  • column_stack equivalent in Theano
  • Theano: fast on CPU but keeps freezing
  • Does convolution in Theano rotate the filters?
  • Is there any way to cache the theano compiling result?
  • when do use borrow=True for theano shared variables?
  • Type-error in shared variable in theano
  • Theano TypeError