mshadow | Namespace for mshadow |
expr | Namespace for abstract expressions and expression templates; has no dependency on tensor.h. These data structures take no part in computation; they are only used to define operations and represent expressions symbolically |
type | Type of expressions |
ExpEngine | Expression engine that actually interprets these expressions; this is a function template that needs to be implemented for specific expressions |
ContainerExp | Base class of all variables that can be assigned values |
Exp | Base class for expression |
ScalarExp | Scalar expression |
TransposeExp | Represent a transpose expression of a container |
DotExp | Matrix multiplication expression dot( lhs[.T], rhs[.T] ) |
BinaryMapExp | Binary map expression lhs [op] rhs |
UnaryMapExp | Unary map expression op(src) |
MakeTensorExp | General class that allows extensions that make tensors of some shape |
Plan | This part of the code gives a plan that can be used to carry out execution |
Plan< Tensor< Device, dim > > | |
Plan< Tensor< Device, 1 > > | |
Plan< ScalarExp > | |
Plan< BinaryMapExp< OP, TA, TB, etype > > | |
Plan< UnaryMapExp< OP, TA, etype > > | |
Plan< MakeTensorExp< SubType, SrcExp, dim > > | |
ExpInfo | Static type inference template, used to get the dimension of each expression; if ExpInfo<E>::kDim == -1, there is a mismatch in the expression; if ( ExpInfo<E>::kDevMask & cpu::kDevMask ) != 0, the expression can be assigned on the CPU |
ExpInfo< ScalarExp > | |
ExpInfo< Tensor< Device, dim > > | |
ExpInfo< MakeTensorExp< T, SrcExp, dim > > | |
ExpInfo< UnaryMapExp< OP, TA, etype > > | |
ExpInfo< BinaryMapExp< OP, TA, TB, etype > > | |
TypeCheck | Template to do type checking |
TypeCheckPass | |
TypeCheckPass< false > | |
TypeCheckPass< true > | |
ShapeCheck | |
ShapeCheck< dim, ScalarExp > | |
ShapeCheck< dim, Tensor< Device, dim > > | |
ShapeCheck< dim, MakeTensorExp< T, SrcExp, dim > > | |
ShapeCheck< dim, UnaryMapExp< OP, TA, etype > > | |
ShapeCheck< dim, BinaryMapExp< OP, TA, TB, etype > > | |
DotEngine | |
BLASEngine | |
BLASEngine< cpu > | |
BLASEngine< gpu > | |
DotEngine< SV, xpu, 2, 2, 2, transpose_left, transpose_right > | |
DotEngine< SV, xpu, 1, 1, 2, false, transpose_right > | |
DotEngine< SV, xpu, 2, 1, 1, true, false > | |
ExpComplexEngine | Engine that evaluates complex expressions |
ExpEngine< SV, Tensor< Device, dim > > | |
ExpComplexEngine< SV, Device, dim, DotExp< Tensor< Device, ldim >, Tensor< Device, rdim >, ltrans, rtrans > > | |
Broadcast1DExp | Broadcast a 1-D tensor into a higher-dimensional tensor; input: Tensor<Device,1>: ishape[0]; output: Tensor<Device,dimdst>: oshape[dimcast] = ishape[0] |
UnpackPatchToColXExp | Unpack local (overlapping) patches of an image into columns of a matrix; can be used to implement convolution. This version supports unpacking a batch of multiple images; after unpacking, output = dot( weight, mat ) gives the convolved results |
PackColToPatchXExp | Reverse operation of UnpackPatchToCol, used to backprop the gradient; this version supports multiple images |
ReshapeExp | Reshape the content to another shape; input: Tensor<Device,dimsrc>: ishape; output: Tensor<Device,dimdst>: oshape, with ishape.Size() == oshape.Size() |
SwapAxisExp | Swap two axes of a tensor; input: Tensor<Device,dim>: ishape; output: Tensor<Device,dimdst>: oshape[a1], oshape[a2] = ishape[a2], ishape[a1] |
ReduceTo1DExp | Reduction to a 1-dimensional tensor; input: Tensor<Device,k>: ishape; output: Tensor<Device,1>: shape[0] = ishape[dimkeep] |
PoolingExp | Pooling expression: do reduction over local patches of an image |
UnPoolingExp | Unpooling expression: reverse operation of pooling, used to pass the gradient back |
PaddingExp | Padding expression: pad an image with zeros |
CroppingExp | Cropping expression: cut off the boundary region; reverse operation of padding |
MirroringExp | Mirroring expression: mirror an image along its width |
ChannelPoolingExp | Channel pooling expression: do reduction over (local nearby) channels; used to implement local response normalization |
ExpComplexEngine< SV, Device, 1, ReduceTo1DExp< EType, Reducer, dimkeep > > | |
ExpComplexEngine< SV, Device, 1, ReduceTo1DExp< EType, Reducer, 0 > > | |
Plan< Broadcast1DExp< Device, dimdst, dimcast > > | Execution plan of Broadcast1DExp |
Plan< Broadcast1DExp< Device, dimdst, 0 > > | Execution plan of Broadcast1DExp |
Plan< UnpackPatchToColXExp< SrcExp, srcdim > > | |
Plan< PackColToPatchXExp< Device, dstdim > > | |
Plan< ReshapeExp< SrcExp, dimdst, dimsrc > > | |
Plan< ReshapeExp< SrcExp, dimdst, 1 > > | |
Plan< SwapAxisExp< SrcExp, dimsrc, a1, a2 > > | |
Plan< SwapAxisExp< SrcExp, dimsrc, 0, a2 > > | |
Plan< PoolingExp< Reducer, SrcExp, srcdim > > | |
Plan< UnPoolingExp< Reducer, Device > > | |
Plan< PaddingExp< SrcExp, srcdim > > | |
Plan< CroppingExp< SrcExp, srcdim > > | |
Plan< MirroringExp< SrcExp, srcdim > > | |
Plan< ChannelPoolingExp< Reducer, SrcExp, srcdim > > | |
SSECheck< Broadcast1DExp< cpu, dimdst, 0 > > | |
SSEAlignCheck< 2, Broadcast1DExp< cpu, dimdst, 0 > > | |
SSEPlan< Broadcast1DExp< cpu, dimdst, 0 > > | |
SSEPlan | |
SSEPlan< Tensor< Device, dim > > | |
SSEPlan< ScalarExp > | |
SSEPlan< BinaryMapExp< OP, TA, TB, etype > > | |
SSEPlan< UnaryMapExp< OP, TA, etype > > | |
SSECheck | Static check for SSE: if an expression E cannot be evaluated using SSE, then kPass = false |
SSECheck< ScalarExp > | |
SSECheck< Tensor< cpu, dim > > | |
SSECheck< UnaryMapExp< OP, TA, etype > > | |
SSECheck< BinaryMapExp< OP, TA, TB, etype > > | |
SSEAlignCheck | |
SSEAlignCheck< dim, ScalarExp > | |
SSEAlignCheck< dim, Tensor< cpu, dim > > | |
SSEAlignCheck< dim, UnaryMapExp< OP, TA, etype > > | |
SSEAlignCheck< dim, BinaryMapExp< OP, TA, TB, etype > > | |
op | Namespace for operators used in expressions |
sigmoid | |
sigmoid_grad | |
relu | Rectified Linear Operation |
relu_grad | |
tanh | |
tanh_grad | |
softplus | |
softplus_grad | |
bnll | |
bnll_grad | |
square | |
stanh | Scaled tanh with hard-coded scale factor |
stanh_grad | |
threshold | Used to generate Bernoulli masks |
power | Used for element-wise power |
sqrtop | |
mul | Mul operator |
plus | Plus operator |
minus | Minus operator |
div | Divide operator |
right | Get rhs |
identity | Identity function that maps a real number to itself |
red | Namespace for reducer operations |
sum | Sum reducer |
maximum | Maximum reducer |
sse2 | Namespace to support sse2 vectorization |
FVec | Float vector real type, used for vectorization |
FVec< float > | Vector real type for float |
FVec< double > | Vector real type for double |
SSEOp | SSE2 operator type of a certain operator |
SSEOp< op::plus > | |
SSEOp< op::minus > | |
SSEOp< op::mul > | |
SSEOp< op::div > | |
SSEOp< op::identity > | |
Saver | |
Saver< sv::saveto, TFloat > | |
sv | Namespace for savers |
saveto | Save to saver: = |
plusto | Plus to saver: += |
minusto | Minus to saver: -= |
multo | Multiply to saver: *= |
divto | Divide to saver: /= |
utils | Namespace for helper utils of the project |
IStream | Interface for stream I/O, used to serialize data; it is not restricted to this interface only: in SaveBinary/LoadBinary mshadow accepts any class that implements Read and Write |
FileStream | Implementation of file i/o stream |
Shape | Shape of a tensor. IMPORTANT NOTE: this shape is different from numpy.shape: shape[0] gives the lowest dimension and shape[dimension-1] gives the highest; shape[k] corresponds to the k-th dimension of the tensor |
cpu | Device name CPU |
gpu | Device name GPU |
Tensor | General tensor |
Tensor< Device, 1 > | |
TensorContainer | Tensor container that does memory allocation and resizing like STL; use it to save the lines of FreeSpace in a class. Do not abuse it: efficiency comes from pre-allocation and no re-allocation |
MapExpCPUEngine | |
MapExpCPUEngine< false, SV, dim, E, etype > | |
MapExpCPUEngine< true, SV, dim, E, etype > | |
Random | Random number generator |
Random< cpu > | CPU random number generator |
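The Exp / BinaryMapExp / ExpEngine entries above implement lazy expression templates: an arithmetic expression is built as a type and only evaluated element-by-element when assigned to a container. A minimal self-contained sketch of that pattern (simplified, not the actual mshadow code; `Vec` is an illustrative stand-in for `Tensor`):

```cpp
#include <cassert>
#include <cstddef>

// CRTP base: the concrete expression type is known at compile time.
template<typename SubType>
struct Exp {
  const SubType& self() const { return *static_cast<const SubType*>(this); }
};

// Leaf expression: a 1-D "tensor" view over a raw buffer.
struct Vec : public Exp<Vec> {
  float* dptr; size_t size;
  Vec(float* d, size_t s) : dptr(d), size(s) {}
  float Eval(size_t i) const { return dptr[i]; }
  // Assignment interprets the expression: this is the role of ExpEngine/Plan.
  template<typename E>
  Vec& operator=(const Exp<E>& e) {
    for (size_t i = 0; i < size; ++i) dptr[i] = e.self().Eval(i);
    return *this;
  }
};

// Binary map expression: lhs [op] rhs, evaluated lazily per element.
template<typename OP, typename TA, typename TB>
struct BinaryMapExp : public Exp<BinaryMapExp<OP, TA, TB>> {
  const TA& lhs; const TB& rhs;
  BinaryMapExp(const TA& l, const TB& r) : lhs(l), rhs(r) {}
  float Eval(size_t i) const { return OP::Map(lhs.Eval(i), rhs.Eval(i)); }
};

struct plus { static float Map(float a, float b) { return a + b; } };

// operator+ builds the expression type; no computation happens here.
template<typename TA, typename TB>
BinaryMapExp<plus, TA, TB> operator+(const Exp<TA>& l, const Exp<TB>& r) {
  return BinaryMapExp<plus, TA, TB>(l.self(), r.self());
}
```

Because the whole expression is evaluated in a single loop at assignment time, no intermediate temporaries are allocated, which is the point of the design.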
singa | The code is adapted from that of Caffe, whose license is attached |
Msg | Msg used to transfer Param info (gradient or value), feature blob, etc between workers, stubs and servers |
SocketInterface | |
Poller | |
Dealer | |
Router | |
Driver | |
BridgeLayer | |
BridgeDstLayer | For receiving data from a layer on other threads, which may reside on other nodes due to layer/data partition |
BridgeSrcLayer | For sending data to a layer on other threads, which may reside on other nodes due to layer/data partition |
ConcateLayer | Connect multiple (src) layers with a single (dst) layer |
SliceLayer | Connect a single (src) layer with multiple (dst) layers |
SplitLayer | Connect a single (src) layer with multiple (dst) layers |
DataLayer | Base layer for reading records from local Shard, HDFS, lmdb, etc |
ShardDataLayer | Layer for loading Record from DataShard |
ParserLayer | Base layer for parsing the input records into Blobs |
LabelLayer | Derived from ParserLayer to parse labels from SingleLabelImageRecord |
MnistLayer | Derived from ParserLayer to parse MNIST features from SingleLabelImageRecord |
RGBImageLayer | Derived from ParserLayer to parse RGB image features from SingleLabelImageRecord |
PrefetchLayer | Layer for prefetching data records and parsing them |
Layer | Base layer class |
ConnectionLayer | Base layer for connecting layers when neural net is partitioned |
InputLayer | Base layer for getting input data |
NeuronLayer | |
LossLayer | Base layer for calculating loss and other metrics, e.g., precision |
EuclideanLossLayer | Squared Euclidean loss: 0.5 * ||predict - ground_truth||^2 |
SoftmaxLossLayer | Cross-entropy loss applied to the probabilities after Softmax |
NeuralNet | The neural network is constructed from user configurations in NetProto |
ConvolutionLayer | Convolution layer |
CConvolutionLayer | Use im2col from Caffe |
DropoutLayer | |
LRNLayer | Local Response Normalization layer |
PoolingLayer | |
CPoolingLayer | Use book-keeping for BP following Caffe's pooling implementation |
ReLULayer | |
InnerProductLayer | |
STanhLayer | This layer applies a scaled tanh function to neuron activations |
SigmoidLayer | This layer applies the sigmoid function to neuron activations |
RBMLayer | Base layer for RBM models |
RBMVisLayer | RBM visible layer |
RBMHidLayer | RBM hidden layer |
Server | |
Trainer | Every running process has a Trainer object which launches one or more worker (and server) threads |
Worker | Runs the training algorithm |
BPWorker | |
CDWorker | |
SyncedMemory | Manages memory allocation and synchronization between the host (CPU) and device (GPU) |
Blob | |
Cluster | Cluster is a singleton object which provides cluster configurations, e.g., the topology of the cluster |
RTCallback | |
JobInfo | |
ZKService | |
ClusterRuntime | ClusterRuntime is a runtime service that manages dynamic configuration and status of the whole cluster |
JobManager | |
Metric | Performance metrics |
DataShard | Data shard stores training/validation/test tuples |
Node | |
Graph | The neural net is constructed by first creating a graph with each node representing one layer |
ParamGenerator | Base parameter generator, which initializes parameter values |
GaussianGen | |
GaussianSqrtFanInGen | |
UniformGen | |
UniformSqrtFanInGen | |
UniformSqrtFanInOutGen | |
Param | Base parameter class |
ParamEntry | ParamEntry is used for aggregating gradients of Params shared by workers from the same group |
LRGenerator | Base learning rate generator |
FixedStepLRGen | |
StepLRGen | |
LinearLRGen | |
ExpLRGen | |
InvLRGen | |
InvTLRGen | |
Updater | Updater for Param |
SGDUpdater | |
AdaGradUpdater | |
NesterovUpdater | |
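The LRGenerator family above (FixedStepLRGen, StepLRGen, etc.) produces a learning rate as a function of the training step. A hedged sketch of a step-decay schedule, assuming the common form base_lr * gamma^(step / change_freq) (names and configuration fields are illustrative, not SINGA's actual API):

```cpp
#include <cassert>
#include <cmath>

// Step learning-rate schedule: the rate is multiplied by `gamma`
// every `change_freq` steps (integer division floors the exponent).
float StepLR(float base_lr, float gamma, int change_freq, int step) {
  return base_lr * std::pow(gamma, step / change_freq);
}
```

For example, with base_lr = 0.1, gamma = 0.5, change_freq = 10, the rate halves every 10 steps: 0.1 for steps 0..9, 0.05 for steps 10..19, and so on.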
std | |
tr1 | |
gtest_internal | |
ByRef | |
ByRef< T & > | |
AddRef | |
AddRef< T & > | |
Get | |
TupleElement | |
TupleElement< true, 0, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 1, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 2, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 3, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 4, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 5, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 6, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 7, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 8, GTEST_10_TUPLE_(T) > | |
TupleElement< true, 9, GTEST_10_TUPLE_(T) > | |
Get< 0 > | |
Get< 1 > | |
Get< 2 > | |
Get< 3 > | |
Get< 4 > | |
Get< 5 > | |
Get< 6 > | |
Get< 7 > | |
Get< 8 > | |
Get< 9 > | |
SameSizeTuplePrefixComparator | |
SameSizeTuplePrefixComparator< 0, 0 > | |
SameSizeTuplePrefixComparator< k, k > | |
tuple | |
tuple<> | |
tuple_size | |
tuple_size< GTEST_0_TUPLE_(T) > | |
tuple_size< GTEST_1_TUPLE_(T) > | |
tuple_size< GTEST_2_TUPLE_(T) > | |
tuple_size< GTEST_3_TUPLE_(T) > | |
tuple_size< GTEST_4_TUPLE_(T) > | |
tuple_size< GTEST_5_TUPLE_(T) > | |
tuple_size< GTEST_6_TUPLE_(T) > | |
tuple_size< GTEST_7_TUPLE_(T) > | |
tuple_size< GTEST_8_TUPLE_(T) > | |
tuple_size< GTEST_9_TUPLE_(T) > | |
tuple_size< GTEST_10_TUPLE_(T) > | |
tuple_element | |
testing | |
internal | |
SingleFailureChecker | |
GTestFlagSaver | |
TestPropertyKeyIs | |
UnitTestOptions | |
OsStackTraceGetterInterface | |
OsStackTraceGetter | |
TraceInfo | |
DefaultGlobalTestPartResultReporter | |
DefaultPerThreadTestPartResultReporter | |
UnitTestImpl | |
TestResultAccessor | |
PrettyUnitTestResultPrinter | |
TestEventRepeater | |
XmlUnitTestResultPrinter | |
ScopedPrematureExitFile | |
TestCaseNameIs | |
CompileAssert | |
StaticAssertTypeEqHelper | |
StaticAssertTypeEqHelper< T, T > | |
scoped_ptr | |
RE | |
GTestLog | |
Mutex | |
GTestMutexLock | |
ThreadLocal | |
bool_constant | |
is_pointer | |
is_pointer< T * > | |
IteratorTraits | |
IteratorTraits< T * > | |
IteratorTraits< const T * > | |
TypeWithSize | |
TypeWithSize< 4 > | |
TypeWithSize< 8 > | |
String | |
FilePath | |
ScopedTrace | |
FloatingPoint | |
TypeIdHelper | |
TestFactoryBase | |
TestFactoryImpl | |
ConstCharPtr | |
Random | |
CompileAssertTypesEqual | |
CompileAssertTypesEqual< T, T > | |
RemoveReference | |
RemoveReference< T & > | |
RemoveConst | |
RemoveConst< const T > | |
RemoveConst< const T[N]> | |
AddReference | |
AddReference< T & > | |
ImplicitlyConvertible | |
IsAProtocolMessage | |
EnableIf | |
EnableIf< true > | |
NativeArray | |
linked_ptr_internal | |
linked_ptr | |
UniversalPrinter | |
UniversalPrinter< T[N]> | |
UniversalPrinter< T & > | |
UniversalTersePrinter | |
UniversalTersePrinter< T & > | |
UniversalTersePrinter< T[N]> | |
UniversalTersePrinter< const char * > | |
UniversalTersePrinter< char * > | |
UniversalTersePrinter< const wchar_t * > | |
UniversalTersePrinter< wchar_t * > | |
TuplePrefixPrinter | |
TuplePrefixPrinter< 0 > | |
TuplePrefixPrinter< 1 > | |
ParamGeneratorInterface | |
ParamGenerator | |
ParamIteratorInterface | |
ParamIterator | |
RangeGenerator | |
ValuesInIteratorRangeGenerator | |
ParameterizedTestFactory | |
TestMetaFactoryBase | |
TestMetaFactory | |
ParameterizedTestCaseInfoBase | |
ParameterizedTestCaseInfo | |
ParameterizedTestCaseRegistry | |
ValueArray1 | |
ValueArray2 | |
ValueArray3 | |
ValueArray4 | |
ValueArray5 | |
ValueArray6 | |
ValueArray7 | |
ValueArray8 | |
ValueArray9 | |
ValueArray10 | |
ValueArray11 | |
ValueArray12 | |
ValueArray13 | |
ValueArray14 | |
ValueArray15 | |
ValueArray16 | |
ValueArray17 | |
ValueArray18 | |
ValueArray19 | |
ValueArray20 | |
ValueArray21 | |
ValueArray22 | |
ValueArray23 | |
ValueArray24 | |
ValueArray25 | |
ValueArray26 | |
ValueArray27 | |
ValueArray28 | |
ValueArray29 | |
ValueArray30 | |
ValueArray31 | |
ValueArray32 | |
ValueArray33 | |
ValueArray34 | |
ValueArray35 | |
ValueArray36 | |
ValueArray37 | |
ValueArray38 | |
ValueArray39 | |
ValueArray40 | |
ValueArray41 | |
ValueArray42 | |
ValueArray43 | |
ValueArray44 | |
ValueArray45 | |
ValueArray46 | |
ValueArray47 | |
ValueArray48 | |
ValueArray49 | |
ValueArray50 | |
CartesianProductGenerator2 | |
CartesianProductGenerator3 | |
CartesianProductGenerator4 | |
CartesianProductGenerator5 | |
CartesianProductGenerator6 | |
CartesianProductGenerator7 | |
CartesianProductGenerator8 | |
CartesianProductGenerator9 | |
CartesianProductGenerator10 | |
CartesianProductHolder2 | |
CartesianProductHolder3 | |
CartesianProductHolder4 | |
CartesianProductHolder5 | |
CartesianProductHolder6 | |
CartesianProductHolder7 | |
CartesianProductHolder8 | |
CartesianProductHolder9 | |
CartesianProductHolder10 | |
HasNewFatalFailureHelper | |
FormatForComparison | |
FormatForComparison< ToPrint[N], OtherOperand > | |
EqHelper | |
EqHelper< true > | |
AssertHelper | |
internal2 | |
TypeWithoutFormatter | |
TypeWithoutFormatter< T, kProtobuf > | |
TypeWithoutFormatter< T, kConvertibleToInteger > | |
ScopedFakeTestPartResultReporter | |
Message | |
TestPartResult | |
TestPartResultArray | |
TestPartResultReporterInterface | |
AssertionResult | |
Test | |
TestProperty | |
TestResult | |
TestInfo | |
TestCase | |
Environment | |
TestEventListener | |
EmptyTestEventListener | |
TestEventListeners | |
UnitTest | |
WithParamInterface | |
TestWithParam | |
Factory | Factory template to generate a class (or sub-class) object based on an id |
Singleton | Thread-safe implementation for C++11 according to http://stackoverflow.com/questions/2576022/efficient-thread-safe-singleton-in-c |
tinydir_dir | |
tinydir_file | |
TSingleton | Thread Specific Singleton |
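The Singleton entry above cites the C++11 thread-safe idiom: since C++11, initialization of a function-local static is guaranteed to happen exactly once, even under concurrent access. A minimal sketch of that idiom (`Counter` is a hypothetical payload type, not from the source):

```cpp
#include <cassert>

// Meyers singleton: C++11 guarantees the local static below is
// constructed exactly once, thread-safely, on first use.
template<typename T>
class Singleton {
 public:
  static T* Instance() {
    static T instance;
    return &instance;
  }
};

// Hypothetical payload type for demonstration.
struct Counter { int value = 0; };
```

Every call to `Singleton<Counter>::Instance()` returns the same pointer, so state written through one call is visible through the next.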