This page provides documentation for a previous release of the software. The corresponding English page has been removed from the current release.
Deep Learning Toolbox Functions - Alphabetical List
A
AcceleratedFunction | Accelerated deep learning function (since R2021a) |
accuracyMetric | Deep learning accuracy metric (since R2023b) |
activations | Compute deep learning network layer activations |
adamupdate | Update parameters using adaptive moment estimation (Adam) (since R2019b) |
adapt | Adapt neural network to data as it is simulated |
adaptwb | Adapt network with weight and bias learning rules |
adddelay | Add delay to neural network response |
addInputLayer | Add input layer to network (since R2022b) |
additionLayer | Addition layer |
addLayers | Add layers to layer graph or network |
addMetrics | Compute additional classification performance metrics (since R2022b) |
addParameter | Add parameter to ONNXParameters object (since R2020b) |
alexnet | AlexNet convolutional neural network |
analyzeNetwork | Analyze deep learning network architecture |
assembleNetwork | Assemble deep learning network from pretrained layers |
attention | Dot-product attention (since R2022b) |
aucMetric | Deep learning area under ROC curve (AUC) metric (since R2023b) |
audioDataAugmenter | Augment audio data (since R2019b) |
audioDatastore | Datastore for collection of audio files |
audioFeatureExtractor | Streamline audio feature extraction (since R2019b) |
augment | Apply identical random transformations to multiple images |
augmentedImageDatastore | Transform batches to augment image data |
augmentedImageSource | (To be removed) Generate batches of augmented image data |
Autoencoder | Autoencoder class |
average | Compute performance metrics for average receiver operating characteristic (ROC) curve in multiclass problem (since R2022b) |
averagePooling1dLayer | 1-D average pooling layer (since R2021b) |
averagePooling2dLayer | Average pooling layer |
averagePooling3dLayer | 3-D average pooling layer (since R2019a) |
avgpool | Pool data to average values over spatial dimensions (since R2019b) |
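To show how the custom-training-loop updaters listed above (such as adamupdate) are typically combined with dlfeval and dlgradient, here is a minimal sketch; the two-layer network, random data, and loss choice are illustrative assumptions, not part of this reference list:

```matlab
% Sketch of a custom training loop with adamupdate (toy network and data).
net = dlnetwork([featureInputLayer(4) fullyConnectedLayer(1)]);
X = dlarray(rand(4,8,"single"),"CB");   % 4 features, batch of 8 (assumed data)
T = dlarray(rand(1,8,"single"),"CB");   % assumed targets
avgGrad = []; avgSqGrad = [];           % Adam state, empty on the first call
for iteration = 1:10
    [loss,gradients] = dlfeval(@modelLoss,net,X,T);
    [net,avgGrad,avgSqGrad] = adamupdate(net,gradients, ...
        avgGrad,avgSqGrad,iteration);
end

function [loss,gradients] = modelLoss(net,X,T)
    Y = forward(net,X);                 % network output for training
    loss = mse(Y,T);                    % half mean squared error
    gradients = dlgradient(loss,net.Learnables);
end
```

The same loop structure applies to the other updaters in this list (rmspropupdate, sgdmupdate, lbfgsupdate), with each carrying its own solver state.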
B
BaselineDistributionDiscriminator | Baseline distribution discriminator (since R2023a) |
batchnorm | Normalize data across all observations for each channel independently (since R2019b) |
batchNormalizationLayer | Batch normalization layer |
bilstmLayer | Bidirectional long short-term memory (BiLSTM) layer for recurrent neural network (RNN) |
blockedImageDatastore | Datastore for use with blocks from blockedImage objects (since R2021a) |
boxdist | Distance between two position vectors |
boxLabelDatastore | Datastore for bounding box label data (since R2019b) |
bttderiv | Backpropagation through time derivative function |
C
calibrate | Simulate and collect ranges of a deep neural network (since R2020a) |
cascadeforwardnet | Generate cascade-forward neural network |
catelements | Concatenate neural network data elements |
catsamples | Concatenate neural network data samples |
catsignals | Concatenate neural network data signals |
cattimesteps | Concatenate neural network data timesteps |
cellmat | Create cell array of matrices |
cellpose | Configure Cellpose model for cell segmentation (since R2023b) |
checkLayer | Check validity of custom or function layer |
classificationLayer | Classification output layer |
ClassificationOutputLayer | Classification layer |
classify | Classify data using trained deep learning neural network |
classifyAndUpdateState | Classify data using a trained recurrent neural network and update the network state |
classifySound | Classify sounds in audio signal (since R2020b) |
clearCache | Clear accelerated deep learning function trace cache (since R2021a) |
clippedReluLayer | Clipped Rectified Linear Unit (ReLU) layer |
close | Close training information plot (since R2023b) |
closeloop | Convert neural network open-loop feedback to closed loop |
codegen | Generate C/C++ code from MATLAB code |
coder.DeepLearningConfig | Create deep learning code generation configuration objects |
coder.getDeepLearningLayers | Get the list of layers supported for code generation for a specific deep learning library |
coder.loadDeepLearningNetwork | Load deep learning network model |
coder.loadNetworkDistributionDiscriminator | Load network distribution discriminator for code generation (since R2023a) |
combine | Combine data from multiple datastores (since R2019a) |
CombinedDatastore | Datastore to combine data read from multiple underlying datastores (since R2019a) |
combvec | Create all combinations of vectors |
compet | Competitive transfer function |
competlayer | Competitive layer |
compressNetworkUsingProjection | Compress neural network using projection (since R2022b) |
con2seq | Convert concurrent vectors to sequential vectors |
concatenationLayer | Concatenation layer (since R2019a) |
concur | Create concurrent bias vectors |
configure | Configure network inputs and outputs to best match input and target data |
confusion | Classification confusion matrix |
confusionchart | Create confusion matrix chart for classification problem |
confusionmat | Compute confusion matrix for classification problem |
connectLayers | Connect layers in layer graph or network |
convolution1dLayer | 1-D convolutional layer (since R2021b) |
convolution2dLayer | 2-D convolutional layer |
convolution3dLayer | 3-D convolutional layer (since R2019a) |
convwf | Convolution weight function |
countlabels | Count number of unique labels (since R2021a) |
crepe | CREPE neural network (since R2021a) |
crop2dLayer | 2-D crop layer |
crop3dLayer | 3-D crop layer (since R2019b) |
crosschannelnorm | Cross channel square-normalize using local responses (since R2020a) |
crossChannelNormalizationLayer | Channel-wise local response normalization layer |
crossentropy | Cross-entropy loss for classification tasks (since R2019b) |
crossentropy | Neural network performance |
ctc | Connectionist temporal classification (CTC) loss for unaligned sequence classification (since R2021a) |
cwtfilterbank | Continuous wavelet transform filter bank |
cwtLayer | Continuous wavelet transform (CWT) layer (since R2022b) |
cwtmag2sig | Signal reconstruction from CWT magnitude (since R2023b) |
D
DAGNetwork | Directed acyclic graph (DAG) network for deep learning |
darknet19 | DarkNet-19 convolutional neural network (since R2020a) |
darknet53 | DarkNet-53 convolutional neural network (since R2020a) |
decode | Decode encoded data |
deepDreamImage | Visualize network features using deep dream |
deeplabv3plusLayers | Create DeepLab v3+ convolutional neural network for semantic image segmentation (since R2019b) |
deepSignalAnomalyDetector | Create signal anomaly detector (since R2023a) |
defaultderiv | Default derivative function |
densenet201 | DenseNet-201 convolutional neural network |
depthConcatenationLayer | Depth concatenation layer |
detect | Detect objects using PointPillars object detector (since R2021b) |
detectspeechnn | Detect boundaries of speech in audio signal using AI (since R2023a) |
detectTextCRAFT | Detect texts in images by using CRAFT deep learning model (since R2022a) |
dims | Dimension labels of dlarray (since R2019b) |
disconnectLayers | Disconnect layers in layer graph or network |
dist | Euclidean distance weight function |
distdelaynet | Distributed delay network |
distributionScores | Distribution confidence scores (since R2023a) |
divideblock | Divide targets into three sets using blocks of indices |
divideind | Divide targets into three sets using specified indices |
divideint | Divide targets into three sets using interleaved indices |
dividerand | Divide targets into three sets using random indices |
dividetrain | Assign all targets to training set |
dlaccelerate | Accelerate deep learning function for custom training loops (since R2021a) |
dlarray | Deep learning array for customization (since R2019b) |
dlconv | Deep learning convolution (since R2019b) |
dlcwt | Deep learning continuous wavelet transform (since R2022b) |
dlfeval | Evaluate deep learning model for custom training loops (since R2019b) |
dlgradient | Compute gradients for custom training loops using automatic differentiation (since R2019b) |
dlhdl.Target | Configure interface to target board for workflow deployment (since R2020b) |
dlhdl.Workflow | Configure deployment workflow for deep learning neural network (since R2020b) |
dlmodwt | Deep learning maximal overlap discrete wavelet transform and multiresolution analysis (since R2022a) |
dlmtimes | (Not recommended) Batch matrix multiplication for deep learning (since R2020a) |
dlnetwork | Deep learning network for custom training loops (since R2019b) |
dlode45 | Deep learning solution of nonstiff ordinary differential equation (ODE) (since R2021b) |
dlquantizationOptions | Options for quantizing a trained deep neural network (since R2020a) |
dlquantizer | Quantize a deep neural network to 8-bit scaled integer data types (since R2020a) |
dlstft | Deep learning short-time Fourier transform (since R2021a) |
dltranspconv | Deep learning transposed convolution (since R2019b) |
dlupdate | Update parameters using custom function (since R2019b) |
doc2sequence | Convert documents to sequences for deep learning |
dotprod | Dot product weight function |
dropoutLayer | Dropout layer |
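Several entries in this section (dlarray, dims) and related ones elsewhere in the list (finddim, stripdims) deal with labeled dlarray dimensions. A minimal sketch, with arbitrary example sizes:

```matlab
% Sketch: labeled dlarray dimensions (sizes here are arbitrary examples).
X = dlarray(rand(28,28,3,16),"SSCB");  % spatial, spatial, channel, batch
fmt = dims(X);        % format string of the array, 'SSCB'
c   = finddim(X,"C"); % index of the channel dimension (3 here)
U   = stripdims(X);   % same data with the format labels removed
```

Format labels like "SSCB" tell deep learning operations which dimensions hold spatial, channel, batch, and time data.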
E
edfheader | Create header structure for EDF or EDF+ file (since R2021a) |
edfinfo | Get information about EDF/EDF+ file (since R2020b) |
edfread | Read data from EDF/EDF+ file (since R2020b) |
edfwrite | Create or modify EDF or EDF+ file (since R2021a) |
efficientnetb0 | EfficientNet-b0 convolutional neural network (since R2020b) |
elliot2sig | Elliot 2 symmetric sigmoid transfer function |
elliotsig | Elliot symmetric sigmoid transfer function |
elmannet | Elman neural network |
eluLayer | Exponential linear unit (ELU) layer (since R2019a) |
embed | Embed discrete data (since R2020b) |
embeddingConcatenationLayer | Embedding concatenation layer (since R2023b) |
encode | Encode input data |
EnergyDistributionDiscriminator | Energy distribution discriminator (since R2023a) |
equalizeLayers | Equalize layer parameters of deep neural network (since R2022b) |
errsurf | Error surface of single-input neuron |
estimateNetworkMetrics | Estimate network metrics for specific layers of a neural network (since R2022a) |
estimateNetworkOutputBounds | Estimate output bounds of deep learning network (since R2022b) |
experiments.Monitor | Update results table and training plots for custom training experiments (since R2021a) |
exportNetworkToTensorFlow | Export Deep Learning Toolbox network or layer graph to TensorFlow (since R2022b) |
exportONNXNetwork | Export network to ONNX model format |
extendts | Extend time series data to given number of timesteps |
extractdata | Extract data from dlarray (since R2019b) |
F
fasterRCNNObjectDetector | Detect objects using Faster R-CNN deep learning detector |
fastFlowAnomalyDetector | Detect anomalies using FastFlow network (since R2023a) |
fastRCNNObjectDetector | Detect objects using Fast R-CNN deep learning detector |
fastTextWordEmbedding | Pretrained fastText word embedding |
fcddAnomalyDetector | Detect anomalies using fully convolutional data description (FCDD) network for anomaly detection (since R2022b) |
featureInputLayer | Feature input layer (since R2020b) |
feedforwardnet | Generate feedforward neural network |
filenames2labels | Get list of labels from filenames (since R2022b) |
findchangepts | Find abrupt changes in signal |
finddim | Find dimensions with specified label (since R2019b) |
findpeaks | Find local maxima |
findPlaceholderLayers | Find placeholder layers in network architecture imported from Keras or ONNX |
fitnet | Function fitting neural network |
fixunknowns | Process data by marking rows with unknown values |
flattenLayer | Flatten layer (since R2019a) |
folders2labels | Get list of labels from folder names (since R2021a) |
formwb | Form bias and weights into single vector |
forward | Compute deep learning network output for training (since R2019b) |
fpderiv | Forward propagation derivative function |
freezeParameters | Convert learnable network parameters in ONNXParameters to nonlearnable (since R2020b) |
fromnndata | Convert data from standard neural network cell array form |
fScoreMetric | Deep learning F-score metric (since R2023b) |
fullyconnect | Sum all weighted input data and apply a bias (since R2019b) |
fullyConnectedLayer | Fully connected layer |
functionLayer | Function layer (since R2021b) |
functionToLayerGraph | (To be removed) Convert deep learning model function to a layer graph (since R2019b) |
G
gadd | Generalized addition |
gdivide | Generalized division |
gelu | Apply Gaussian error linear unit (GELU) activation (since R2022b) |
geluLayer | Gaussian error linear unit (GELU) layer (since R2022b) |
generateFunction | Generate a MATLAB function to run the autoencoder |
generateSimulink | Generate a Simulink model for the autoencoder |
genFunction | Generate MATLAB function for simulating shallow neural network |
gensim | Generate Simulink block for shallow neural network simulation |
getelements | Get neural network data elements |
getL2Factor | Get L2 regularization factor of layer learnable parameter |
getLearnRateFactor | Get learn rate factor of layer learnable parameter |
getsamples | Get neural network data samples |
getsignals | Get neural network data signals |
getsiminit | Get Simulink neural network block initial input and layer delays states |
gettimesteps | Get neural network data timesteps |
getwb | Get network weight and bias values as single vector |
globalAveragePooling1dLayer | 1-D global average pooling layer (since R2021b) |
globalAveragePooling2dLayer | 2-D global average pooling layer (since R2019b) |
globalAveragePooling3dLayer | 3-D global average pooling layer (since R2019b) |
globalMaxPooling1dLayer | 1-D global max pooling layer (since R2021b) |
globalMaxPooling2dLayer | Global max pooling layer (since R2020a) |
globalMaxPooling3dLayer | 3-D global max pooling layer (since R2020a) |
gmultiply | Generalized multiplication |
gnegate | Generalized negation |
googlenet | GoogLeNet convolutional neural network |
gpu2nndata | Reformat neural data back from GPU |
gradCAM | Explain network predictions using Grad-CAM (since R2021a) |
gridtop | Grid layer topology function |
groupedConvolution2dLayer | 2-D grouped convolutional layer (since R2019a) |
groupnorm | Normalize data across grouped subsets of channels for each observation independently (since R2020b) |
groupNormalizationLayer | Group normalization layer (since R2020b) |
groupSubPlot | Group metrics in experiment training plot (since R2021a) |
groupSubPlot | Group metrics in training plot (since R2022b) |
gru | Gated recurrent unit (since R2020a) |
gruLayer | Gated recurrent unit (GRU) layer for recurrent neural network (RNN) (since R2020a) |
gruProjectedLayer | Gated recurrent unit (GRU) projected layer for recurrent neural network (RNN) (since R2023b) |
gsqrt | Generalized square root |
gsubtract | Generalized subtraction |
H
hardlim | Hard-limit transfer function |
hardlims | Symmetric hard-limit transfer function |
hasdata | Determine if minibatchqueue can return mini-batch (since R2020b) |
HBOSDistributionDiscriminator | HBOS distribution discriminator (since R2023a) |
hextop | Hexagonal layer topology function |
huber | Huber loss for regression tasks (since R2021a) |
I
image3dInputLayer | 3-D image input layer (since R2019a) |
imageDataAugmenter | Configure image data augmentation |
imageDatastore | Datastore for image data |
imageInputLayer | Image input layer |
imageLIME | Explain network predictions using LIME (since R2020b) |
importCaffeLayers | Import convolutional neural network layers from Caffe |
importCaffeNetwork | Import pretrained convolutional neural network models from Caffe |
importKerasLayers | (To be removed) Import layers from Keras network |
importKerasNetwork | (To be removed) Import pretrained Keras network and weights |
importNetworkFromONNX | Import ONNX network as MATLAB network (since R2023b) |
importNetworkFromPyTorch | Import PyTorch network as MATLAB network (since R2022b) |
importNetworkFromTensorFlow | Import TensorFlow network as MATLAB network (since R2023b) |
importONNXFunction | Import pretrained ONNX network as a function (since R2020b) |
importONNXLayers | (To be removed) Import layers from ONNX network |
importONNXNetwork | (To be removed) Import pretrained ONNX network |
importTensorFlowLayers | (To be removed) Import layers from TensorFlow network (since R2021a) |
importTensorFlowNetwork | (To be removed) Import pretrained TensorFlow network (since R2021a) |
inceptionresnetv2 | Pretrained Inception-ResNet-v2 convolutional neural network |
inceptionv3 | Inception-v3 convolutional neural network |
ind2vec | Convert indices to vectors |
ind2word | Map encoding index to word |
indexing1dLayer | 1-D indexing layer (since R2023b) |
init | Initialize neural network |
initcon | Conscience bias initialization function |
initialize | Initialize learnable and state parameters of a dlnetwork (since R2021a) |
initlay | Layer-by-layer network initialization function |
initlvq | LVQ weight initialization function |
initnw | Nguyen-Widrow layer initialization function |
initwb | By weight and bias layer initialization function |
initzero | Zero weight and bias initialization function |
instancenorm | Normalize across each channel for each observation independently (since R2021a) |
instanceNormalizationLayer | Instance normalization layer (since R2021a) |
isconfigured | Indicate if network inputs and outputs are configured |
isdlarray | Check if object is dlarray (since R2020b) |
isequal | Check equality of deep learning layer graphs or networks (since R2021a) |
isequaln | Check equality of deep learning layer graphs or networks ignoring NaN values (since R2021a) |
isInNetworkDistribution | Determine whether data is within the distribution of the network (since R2023a) |
isVocabularyWord | Test if word is member of word embedding or encoding |
L
l1loss | L1 loss for regression tasks (since R2021b) |
l2loss | L2 loss for regression tasks (since R2021b) |
labeledSignalSet | Create labeled signal set |
Layer | Network layer for deep learning |
layerGraph | Graph of network layers for deep learning |
layernorm | Normalize data across all channels for each observation independently (since R2021a) |
layerNormalizationLayer | Layer normalization layer (since R2021a) |
layrecnet | Layer recurrent neural network |
lbfgsState | State of limited-memory BFGS (L-BFGS) solver (since R2023a) |
lbfgsupdate | Update parameters using limited-memory BFGS (L-BFGS) (since R2023a) |
leakyrelu | Apply leaky rectified linear unit activation (since R2019b) |
leakyReluLayer | Leaky Rectified Linear Unit (ReLU) layer |
learncon | Conscience bias learning function |
learngd | Gradient descent weight and bias learning function |
learngdm | Gradient descent with momentum weight and bias learning function |
learnh | Hebb weight learning rule |
learnhd | Hebb with decay weight learning rule |
learnis | Instar weight learning function |
learnk | Kohonen weight learning function |
learnlv1 | LVQ1 weight learning function |
learnlv2 | LVQ2.1 weight learning function |
learnos | Outstar weight learning function |
learnp | Perceptron weight and bias learning function |
learnpn | Normalized perceptron weight and bias learning function |
learnsom | Self-organizing map weight learning function |
learnsomb | Batch self-organizing map weight learning function |
learnwh | Widrow-Hoff weight/bias learning function |
linearlayer | Create linear layer |
linkdist | Link distance function |
loadTFLiteModel | Load TensorFlow Lite model (since R2022a) |
logsig | Log-sigmoid transfer function |
lstm | Long short-term memory (since R2019b) |
lstmLayer | Long short-term memory (LSTM) layer for recurrent neural network (RNN) |
lstmProjectedLayer | Long short-term memory (LSTM) projected layer for recurrent neural network (RNN) (since R2022b) |
lvqnet | Learning vector quantization neural network |
lvqoutputs | LVQ outputs processing function |
M
mae | Mean absolute error performance function |
mandist | Manhattan distance weight function |
mapminmax | Process matrices by mapping row minimum and maximum values to [-1 1] |
mapstd | Process matrices by mapping each row's means to 0 and deviations to 1 |
maskrcnn | Detect objects using Mask R-CNN instance segmentation (since R2021b) |
matlab.io.datastore.BackgroundDispatchable | (Not recommended) Add prefetch reading support to datastore |
matlab.io.datastore.BackgroundDispatchable.readByIndex | (Not recommended) Return observations specified by index from datastore |
matlab.io.datastore.MiniBatchable | Add mini-batch support to datastore |
matlab.io.datastore.MiniBatchable.read | (Not recommended) Read data from custom mini-batch datastore |
matlab.io.datastore.PartitionableByIndex | (Not recommended) Add parallelization support to datastore |
matlab.io.datastore.PartitionableByIndex.partitionByIndex | (Not recommended) Partition datastore according to indices |
maxlinlr | Maximum learning rate for linear layer |
maxpool | Pool data to maximum value (since R2019b) |
maxPooling1dLayer | 1-D max pooling layer (since R2021b) |
maxPooling2dLayer | Max pooling layer |
maxPooling3dLayer | 3-D max pooling layer (since R2019a) |
maxunpool | Unpool the output of a maximum pooling operation (since R2019b) |
maxUnpooling2dLayer | Max unpooling layer |
meanabs | Mean of absolute elements of matrix or matrices |
meansqr | Mean of squared elements of matrix or matrices |
midpoint | Midpoint weight initialization function |
minibatchqueue | Create mini-batches for deep learning (since R2020b) |
minmax | Ranges of matrix rows |
mobilenetv2 | MobileNet-v2 convolutional neural network (since R2019a) |
modwt | Maximal overlap discrete wavelet transform |
modwtLayer | Maximal overlap discrete wavelet transform (MODWT) layer (since R2022b) |
mse | Half mean squared error (since R2019b) |
mse | Mean squared normalized error performance function |
multiplicationLayer | Multiplication layer (since R2020b) |
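The minibatchqueue entry above pairs with hasdata, next, and reset elsewhere in this list. A minimal sketch of the iteration pattern, using a toy in-memory datastore as an assumed data source:

```matlab
% Sketch: looping over mini-batches with minibatchqueue (toy in-memory data).
ds  = arrayDatastore(rand(100,4));       % 100 observations, 4 features (assumed)
mbq = minibatchqueue(ds, ...
    "MiniBatchSize",16, ...
    "MiniBatchFormat","BC");             % batch-by-channel formatted dlarray
while hasdata(mbq)
    X = next(mbq);                       % one formatted mini-batch
end
reset(mbq);                              % rewind to the start of the data
```

The queue handles batching, format labeling, and (optionally) GPU transfer, so custom training loops only call hasdata and next.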
N
narnet | Nonlinear autoregressive neural network |
narxnet | Nonlinear autoregressive neural network with external input |
nasnetlarge | Pretrained NASNet-Large convolutional neural network (since R2019a) |
nasnetmobile | Pretrained NASNet-Mobile convolutional neural network (since R2019a) |
nctool | Open Neural Net Clustering app |
negdist | Negative distance weight function |
netinv | Inverse transfer function |
netprod | Product net input function |
netsum | Sum net input function |
network | Convert Autoencoder object into network object |
network | Create custom shallow neural network |
networkDataLayout | Deep learning network data layout for learnable parameter initialization (since R2022b) |
networkDistributionDiscriminator | Deep learning distribution discriminator (since R2023a) |
neuralODELayer | Neural ODE layer (since R2023b) |
neuronPCA | Principal component analysis of neuron activations (since R2022b) |
newgrnn | Design generalized regression neural network |
newlind | Design linear layer |
newpnn | Design probabilistic neural network |
newrb | Design radial basis network |
newrbe | Design exact radial basis network |
next | Obtain next mini-batch of data from minibatchqueue (since R2020b) |
nftool | Open Neural Net Fitting app |
nncell2mat | Combine neural network cell data into matrix |
nncorr | Cross correlation between neural network time series |
nndata | Create neural network data |
nndata2gpu | Format neural data for efficient GPU training or simulation |
nndata2sim | Convert neural network data to Simulink time series |
nnsize | Number of neural data elements, samples, timesteps, and signals |
nntool | (Removed) Open Network/Data Manager |
nntraintool | (Removed) Neural network training tool |
noloop | Remove neural network open- and closed-loop feedback |
normc | 归一化矩阵的列 |
normprod | Normalized dot product weight function |
normr | 归一化矩阵的行 |
nprtool | 打开神经网络模式识别 |
ntstool | 打开神经网络时间序列 |
num2deriv | Numeric two-point network derivative function |
num5deriv | Numeric five-point stencil neural network derivative function |
numelements | Number of elements in neural network data |
numfinite | Number of finite values in neural network data |
numnan | Number of NaN values in neural network data |
numsamples | Number of samples in neural network data |
numsignals | Number of signals in neural network data |
numtimesteps | Number of time steps in neural network data |
O
occlusionSensitivity | Explain network predictions by occluding the inputs (since R2019b) |
ODINDistributionDiscriminator | ODIN distribution discriminator (since R2023a) |
onehotdecode | Decode probability vectors into class labels (since R2020b) |
onehotencode | Encode data labels into one-hot vectors (since R2020b) |
ONNXParameters | Parameters of imported ONNX network for deep learning (since R2020b) |
openl3 | OpenL3 neural network (since R2021a) |
openl3Embeddings | Extract OpenL3 feature embeddings (since R2022a) |
openloop | Convert neural network closed-loop feedback to open loop |
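The onehotencode and onehotdecode entries above form a round trip between categorical labels and one-hot matrices. A minimal sketch with toy labels:

```matlab
% Sketch: round-trip one-hot encoding of categorical labels (toy data).
labels  = categorical(["cat";"dog";"cat"]);
onehot  = onehotencode(labels,2);      % expand classes along dimension 2
decoded = onehotdecode(onehot,categories(labels),2);  % back to categorical
```

The featureDim argument (2 here) names the dimension along which the class scores are laid out; the same value must be used for encoding and decoding.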
P
paddata | Pad data by adding elements (since R2023b) |
padsequences | Pad or truncate sequence data to same length (since R2021a) |
partition | Partition minibatchqueue (since R2020b) |
partitionByIndex | Partition augmentedImageDatastore according to indices |
patchCoreAnomalyDetector | Detect anomalies using PatchCore network (since R2023a) |
patchEmbeddingLayer | Patch embedding layer (since R2023b) |
patternnet | Generate pattern recognition network |
perceptron | Simple single-layer binary classifier |
perform | Calculate network performance |
pitchnn | Estimate pitch with deep learning neural network (since R2021a) |
pixelLabelDatastore | Datastore for pixel label data |
PlaceholderLayer | Layer replacing an unsupported Keras or ONNX layer |
plot | Plot neural network architecture |
plot | Plot receiver operating characteristic (ROC) curves and other performance curves (since R2022b) |
plotconfusion | Plot classification confusion matrix |
plotep | Plot weight-bias position on error surface |
ploterrcorr | Plot autocorrelation of error time series |
ploterrhist | Plot error histogram |
plotes | Plot error surface of single-input neuron |
plotfit | Plot function fit |
plotinerrcorr | Plot input to error time-series cross-correlation |
plotpc | Plot classification line on perceptron vector plot |
plotperform | Plot network performance |
plotpv | Plot perceptron input/target vectors |
plotregression | Plot linear regression |
plotresponse | Plot dynamic network time series response |
plotroc | Plot receiver operating characteristic |
plotsom | Plot self-organizing map |
plotsomhits | Plot self-organizing map sample hits |
plotsomnc | Plot self-organizing map neighbor connections |
plotsomnd | Plot self-organizing map neighbor distances |
plotsomplanes | Plot self-organizing map weight planes |
plotsompos | Plot self-organizing map weight positions |
plotsomtop | Plot self-organizing map topology |
plottrainstate | Plot training state values |
plotv | (To be removed) Plot vectors as lines from origin |
plotvec | Plot vectors with different colors |
plotwb | Plot Hinton diagram of weight and bias values |
plotWeights | Plot a visualization of the weights for the encoder of an autoencoder |
pnormc | Pseudonormalize columns of matrix |
pointnetplusLayers | Create PointNet++ segmentation network (since R2021b) |
pointPillarsObjectDetector | PointPillars object detector (since R2021b) |
positionEmbeddingLayer | Position embedding layer (since R2023b) |
poslin | Positive linear transfer function |
precisionMetric | Deep learning precision metric (since R2023b) |
predict | Predict responses using trained deep learning neural network |
predict | Compute deep learning network output for inference (since R2019b) |
predict | Compute deep learning network output for inference by using a TensorFlow Lite model (since R2022a) |
predict | Reconstruct the inputs using trained autoencoder |
predictAndUpdateState | Predict responses using a trained recurrent neural network and update the network state |
preparets | Prepare input and target time series data for network simulation or training |
processpca | Process columns of matrix with principal component analysis |
ProjectedLayer | Compressed neural network layer via projection (since R2023b) |
prune | Delete neural inputs, layers, and outputs with sizes of zero |
prunedata | Prune data for consistency with pruned network |
purelin | Linear transfer function |
Q
quant | Discretize values as multiples of a quantity |
quantizationDetails | Display quantization details for a neural network (since R2022a) |
quantize | Quantize deep neural network (since R2022a) |
R
radbas | Radial basis transfer function |
radbasn | Normalized radial basis transfer function |
randnc | Normalized column weight initialization function |
randnr | Normalized row weight initialization function |
randomPatchExtractionDatastore | Datastore for extracting random 2-D or 3-D patches from images or pixel label images |
rands | Symmetric random weight/bias initialization function |
randsmall | Small random weight/bias initialization function |
randtop | Random layer topology function |
rcnnObjectDetector | Detect objects using R-CNN deep learning detector |
read | Read data from augmentedImageDatastore |
readByIndex | Read data specified by index from augmentedImageDatastore |
readWordEmbedding | Read word embedding from file |
recallMetric | Deep learning recall metric (since R2023b) |
recordMetrics | Record metric values in experiment results table and training plot (since R2021a) |
recordMetrics | Record metric values for custom training loops (since R2022b) |
regression | (Not recommended) Perform linear regression of shallow network outputs on targets |
regressionLayer | Regression output layer |
RegressionOutputLayer | Regression output layer |
relu | Apply rectified linear unit activation (since R2019b) |
reluLayer | Rectified Linear Unit (ReLU) layer |
removeconstantrows | Process matrices by removing rows with constant values |
removedelay | Remove delay to neural network's response |
removeLayers | Remove layers from layer graph or network |
removeParameter | Remove parameter from ONNXParameters object (since R2020b) |
removerows | Process matrices by removing rows with specified indices |
replaceLayer | Replace layer in layer graph or network |
reset | Reset minibatchqueue to start of data (since R2020b) |
resetState | Reset state parameters of neural network |
resize | Resize data by adding or removing elements (since R2023b) |
resnet101 | ResNet-101 convolutional neural network |
resnet18 | ResNet-18 convolutional neural network |
resnet3dLayers | Create 3-D residual network (since R2021b) |
resnet50 | ResNet-50 convolutional neural network |
resnetLayers | Create 2-D residual network (since R2021b) |
revert | Change network weights and biases to previous initialization values |
risetime | Rise time of positive-going bilevel waveform transitions |
rmseMetric | Deep learning root mean squared error metric (since R2023b) |
rmspropupdate | Update parameters using root mean squared propagation (RMSProp) (since R2019b) |
roc | Receiver operating characteristic |
rocmetrics | Receiver operating characteristic (ROC) curve and performance metrics for binary and multiclass classifiers (since R2022b) |
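The rocmetrics entry above, together with the plot and average entries elsewhere in this list, covers ROC analysis. A minimal sketch with toy two-class data (the labels and scores are invented for illustration):

```matlab
% Sketch: ROC curve metrics from labels and scores (toy two-class data).
labels = ["a";"a";"b";"b"];                     % true labels (assumed)
scores = [0.9 0.1; 0.6 0.4; 0.3 0.7; 0.2 0.8]; % columns follow class order
rocObj = rocmetrics(labels,scores,["a" "b"]);
plot(rocObj)                                    % ROC curve per class
```

The score matrix must have one column per entry in the class-names argument, in the same order.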
S
sae | Sum absolute error performance function |
satlin | 饱和线性传递函数 |
satlins | 对称饱和线性传递函数 |
scalprod | Scalar product weight function |
segmentCells2D | Segment 2-D image using Cellpose (自 R2023b 起) |
segmentCells3D | Segment 3-D image volume using Cellpose (自 R2023b 起) |
segnetLayers | Create SegNet layers for semantic segmentation |
selfAttentionLayer | Self-attention layer (自 R2023a 起) |
selforgmap | 自组织映射 |
separateSpeakers | Separate signal by speakers (自 R2023b 起) |
separatewb | Separate biases and weight values from weight/bias vector |
seq2con | Convert sequential vectors to concurrent vectors |
sequenceFoldingLayer | Sequence folding layer (自 R2019a 起) |
sequenceInputLayer | Sequence input layer |
sequenceUnfoldingLayer | Sequence unfolding layer (自 R2019a 起) |
SeriesNetwork | 用于深度学习的串行网络 |
setelements | Set neural network data elements |
setL2Factor | Set L2 regularization factor of layer learnable parameter |
setLearnRateFactor | Set learn rate factor of layer learnable parameter |
setsamples | Set neural network data samples |
setsignals | Set neural network data signals |
setsiminit | Set neural network Simulink block initial conditions |
settimesteps | Set neural network data timesteps |
setwb | Set all network weight and bias values with single vector |
sgdmupdate | Update parameters using stochastic gradient descent with momentum (SGDM) (自 R2019b 起) |
show | Show training information plot (自 R2023b 起) |
shuffle | Shuffle data in augmentedImageDatastore |
shuffle | Shuffle data in minibatchqueue (自 R2020b 起) |
shufflenet | 预训练 ShuffleNet 卷积神经网络 (自 R2019a 起) |
sigmoid | 应用 sigmoid 激活 (自 R2019b 起) |
sigmoidLayer | Sigmoid layer (自 R2020b 起) |
signalDatastore | Datastore for collection of signals (自 R2020a 起) |
signalFrequencyFeatureExtractor | Streamline signal frequency feature extraction (自 R2021b 起) |
signalLabelDefinition | Create signal label definition |
signalMask | Modify and convert signal masks and extract signal regions of interest (自 R2020b 起) |
signalTimeFeatureExtractor | Streamline signal time feature extraction (自 R2021a 起) |
sigrangebinmask | Label signal samples with values within a specified range (自 R2023a 起) |
sim | Simulate neural network |
sim2nndata | Convert Simulink time series to neural network data |
sinusoidalPositionEncodingLayer | Sinusoidal position encoding layer (自 R2023b 起) |
softmax | Apply softmax activation to channel dimension (自 R2019b 起) |
softmax | softmax 传递函数 |
softmaxLayer | Softmax 层 |
solov2 | Segment objects using SOLOv2 instance segmentation network (自 R2023b 起) |
sortClasses | Sort classes of confusion matrix chart |
splitlabels | Find indices to split labels according to specified proportions (自 R2021a 起) |
squeezenet | SqueezeNet 卷积神经网络 |
squeezesegv2Layers | Create SqueezeSegV2 segmentation network for organized lidar point cloud (自 R2020b 起) |
srchbac | 1-D minimization using backtracking |
srchbre | 1-D interval location using Brent’s method |
srchcha | 1-D minimization using Charalambous' method |
srchgol | 1-D minimization using golden section search |
srchhyb | 1-D minimization using a hybrid bisection-cubic search |
ssdObjectDetector | Detect objects using SSD deep learning detector (Since R2020a) |
sse | Sum squared error performance function |
stack | Stack encoders from several autoencoders together |
staticderiv | Static derivative function |
stft | Short-time Fourier transform (Since R2019a) |
stftLayer | Short-time Fourier transform layer (Since R2021b) |
stftmag2sig | Signal reconstruction from STFT magnitude (Since R2020b) |
stripdims | Remove dlarray data format (Since R2019b) |
sumabs | Sum of absolute elements of matrix or matrices |
summary | Print network summary (Since R2022b) |
sumsqr | Sum of squared elements of matrix or matrices |
swishLayer | Swish layer (Since R2021a) |
T
tanhLayer | Hyperbolic tangent (tanh) layer (Since R2019a) |
tansig | Hyperbolic tangent sigmoid transfer function |
tapdelay | Shift neural network time series data for tap delay |
taylorPrunableNetwork | Network that can be pruned by using first-order Taylor approximation (Since R2022a) |
TFLiteModel | TensorFlow Lite model (Since R2022a) |
timedelaynet | Time delay neural network |
tonndata | Convert data to standard neural network cell array form |
train | Train shallow neural network |
trainAutoencoder | Train an autoencoder |
trainb | Batch training with weight and bias learning rules |
trainbfg | BFGS quasi-Newton backpropagation |
trainbr | Bayesian regularization backpropagation |
trainbu | Batch unsupervised weight/bias training |
trainc | Cyclical order weight/bias training |
traincgb | Conjugate gradient backpropagation with Powell-Beale restarts |
traincgf | Conjugate gradient backpropagation with Fletcher-Reeves updates |
traincgp | Conjugate gradient backpropagation with Polak-Ribière updates |
traingd | Gradient descent backpropagation |
traingda | Gradient descent with adaptive learning rate backpropagation |
traingdm | Gradient descent with momentum backpropagation |
traingdx | Gradient descent with momentum and adaptive learning rate backpropagation |
TrainingInfo | Neural network training information (Since R2023b) |
trainingOptions | Options for training deep learning neural network |
TrainingOptionsADAM | Training options for Adam optimizer |
TrainingOptionsLBFGS | Training options for limited-memory BFGS (L-BFGS) optimizer (Since R2023b) |
TrainingOptionsRMSProp | Training options for RMSProp optimizer |
TrainingOptionsSGDM | Training options for stochastic gradient descent with momentum |
trainingProgressMonitor | Monitor and plot training progress for deep learning custom training loops (Since R2022b) |
trainlm | Levenberg-Marquardt backpropagation |
trainnet | Train deep learning neural network (Since R2023b) |
trainNetwork | Train neural network |
trainoss | One-step secant backpropagation |
trainPointPillarsObjectDetector | Train PointPillars object detector (Since R2021b) |
trainr | Random order incremental training with learning functions |
trainrp | Resilient backpropagation |
trainru | Unsupervised random order weight/bias training |
trains | Sequential order incremental training with learning functions |
trainscg | Scaled conjugate gradient backpropagation |
trainSoftmaxLayer | Train a softmax layer for classification |
trainWordEmbedding | Train word embedding |
transform | Transform datastore (Since R2019a) |
TransformedDatastore | Datastore to transform underlying datastore (Since R2019a) |
transposedConv1dLayer | Transposed 1-D convolution layer (Since R2022a) |
transposedConv2dLayer | Transposed 2-D convolution layer |
transposedConv3dLayer | Transposed 3-D convolution layer (Since R2019a) |
TransposedConvolution1DLayer | Transposed 1-D convolution layer (Since R2022a) |
TransposedConvolution2DLayer | Transposed 2-D convolution layer |
TransposedConvolution3dLayer | Transposed 3-D convolution layer (Since R2019a) |
tribas | Triangular basis transfer function |
trimdata | Trim data by removing elements (Since R2023b) |
tritop | Triangle layer topology function |
U
unconfigure | Unconfigure network inputs and outputs |
unet3dLayers | Create 3-D U-Net layers for semantic segmentation of volumetric images (Since R2019b) |
unetLayers | Create U-Net layers for semantic segmentation |
unfreezeParameters | Convert nonlearnable network parameters in ONNXParameters to learnable (Since R2020b) |
unpackProjectedLayers | Unpack projected layers of neural network (Since R2023b) |
updateInfo | Update information columns in experiment results table (Since R2021a) |
updateInfo | Update information values for custom training loops (Since R2022b) |
updatePrunables | Remove filters from prunable layers based on importance scores (Since R2022a) |
updateScore | Compute and accumulate Taylor-based importance scores for pruning (Since R2022a) |
V
vadnet | Voice activity detection (VAD) neural network (Since R2023a) |
validate | Quantize and validate a deep neural network (Since R2020a) |
vec2ind | Convert vectors to indices |
vec2word | Map embedding vector to word |
verifyNetworkRobustness | Verify adversarial robustness of deep learning network (Since R2022b) |
vgg16 | VGG-16 convolutional neural network |
vgg19 | VGG-19 convolutional neural network |
vggish | VGGish neural network (Since R2020b) |
vggishEmbeddings | Extract VGGish feature embeddings (Since R2022a) |
view | View shallow neural network |
view | View autoencoder |
visionTransformer | Pretrained vision transformer (ViT) neural network (Since R2023b) |
W
waveletScattering | Wavelet time scattering |
word2ind | Map word to encoding index |
word2vec | Map word to embedding vector |
wordEmbedding | Word embedding model to map words to vectors and back |
wordEmbeddingLayer | Word embedding layer for deep learning neural network |
wordEncoding | Word encoding model to map words to indices and back |
writeWordEmbedding | Write word embedding file |
X
xception | Xception convolutional neural network (Since R2019a) |
Y
yamnet | YAMNet neural network (Since R2020b) |
yolov2ObjectDetector | Detect objects using YOLO v2 object detector (Since R2019a) |
yolov3ObjectDetector | Detect objects using YOLO v3 object detector (Since R2021a) |
yolov4ObjectDetector | Detect objects using YOLO v4 object detector (Since R2022a) |
yoloxObjectDetector | Detect objects using YOLOX object detector (Since R2023b) |