Q: When the Tile operator in operations executes _infer_, the value is None. Why is the value lost?
A: The multiples input of the Tile operator must be a constant (the value cannot directly or indirectly come from the input of the graph). Otherwise, None is obtained during graph composition, because graph input is transferred only during graph execution and cannot be obtained during graph composition.

Q: Compared with PyTorch, the nn.Embedding layer lacks the padding operation. Can other operators implement this operation?
A: In PyTorch, padding_idx is used to set the word vector at the padding_idx position in the embedding matrix to 0, and that word vector is not updated during backward propagation. In MindSpore, you can manually initialize the embedding weight corresponding to the padding_idx position to 0. In addition, the loss corresponding to padding_idx is filtered out through a mask operation during training.

Q: Can MindSpore calculate the variance of any tensor?
A: Currently, MindSpore does not have an API or operator that can directly calculate the variance of a tensor.

Q: Does MindSpore support matrix transposition?
A: Yes.

Q: When Conv2D is used to define convolution, the group parameter is used. How is the group parameter transferred? Is it necessary to ensure that the value of group can be exactly divided by the input and output dimensions?
A: The Conv2d operator has the following constraint: when the value of group is greater than 1, it must be the same as the number of input and output channels; no other value of group greater than 1 is supported. Currently, only the nn.Conv2d API of MindSpore supports group convolution, and the number of groups must be the same as the number of input and output channels.
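The padding_idx workaround above can be sketched with plain numpy (a conceptual illustration, not MindSpore API calls; the names vocab_size, embed_dim, and padding_idx are assumptions for the example): zero the embedding row at padding_idx, then mask out the loss at padded positions during training.

```python
import numpy as np

vocab_size, embed_dim, padding_idx = 10, 4, 0

# Manually initialize the embedding table with the padding_idx row set to 0,
# mirroring what PyTorch's nn.Embedding(padding_idx=...) does automatically.
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, embed_dim)).astype(np.float32)
embedding_table[padding_idx] = 0.0

# Look up a token sequence that contains padding.
tokens = np.array([3, 1, padding_idx, 7])
vectors = embedding_table[tokens]  # shape (4, embed_dim); padded row is all zeros

# During training, filter out the loss at padded positions with a mask.
per_token_loss = np.array([0.5, 1.2, 0.9, 0.3], dtype=np.float32)
mask = (tokens != padding_idx).astype(np.float32)
masked_loss = (per_token_loss * mask).sum() / mask.sum()
```

The same masking pattern works inside a network: multiply the per-token loss by the mask before reducing, so padded positions contribute nothing to the gradient.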
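Although no direct variance operator is available, variance can be composed from mean operations, which are provided. A numpy sketch of the identity Var(x) = mean((x - mean(x))^2):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# Compose variance from two mean reductions and an elementwise square.
mean = x.mean()
variance = ((x - mean) ** 2).mean()
```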
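The group constraint above (group greater than 1 only when it equals both the input and the output channel count, i.e. depthwise convolution) can be expressed as a small validation helper; check_conv2d_group is a hypothetical name used here for illustration, not a MindSpore function.

```python
def check_conv2d_group(in_channels: int, out_channels: int, group: int) -> bool:
    """Return True if the group setting satisfies the constraint described in
    the FAQ: group == 1, or group equal to both the input and output channel
    counts (depthwise convolution)."""
    if group == 1:
        return True
    return group == in_channels == out_channels
```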
Q: An error occurs when the Concat operator concatenates tuples containing multiple tensors. What is a better solution (running in dynamic mode) for Concat to concatenate tuples containing multiple Tensors?
A: According to the bottom-layer specifications of the Ascend operator, the number of tensors to be concatenated at a time cannot exceed 192. An error occurs when the number of elements in the input tensor list is greater than or equal to 192.

Q: What is the function of the TransData operator? Can the performance be optimized?
A: The TransData operator is used in the scenario where the data formats (such as NC1HWC0) used by interconnected operators on the network are inconsistent. In this case, the framework automatically inserts the TransData operator to convert the data formats into the same format and then performs computation. Huawei Ascend supports 5D format operations and uses the TransData operator to convert data from 4D to 5D to improve performance.

Q: In the construct function of the static graph mode, how do I remove all negative values contained in a tensor?
A: You are advised to use the ops.clip_by_value interface to change all negative numbers to 0 for computation.

In the post-processing phase (a non-network calculation process, that is, outside the construct function), numpy can be used directly for computation. For example, numpy.concatenate can be used to replace ops.concat, and create_dict_iterator(output_numpy=True) returns dataset data as numpy arrays.
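Given the 192-tensor limit above, one workaround is to concatenate in chunks that stay below the limit; the helper name concat_in_chunks and the use of numpy in place of the framework op are assumptions for illustration.

```python
import numpy as np

# Ascend's bottom-layer limit: fewer than 192 tensors per Concat call.
CHUNK = 191

def concat_in_chunks(tensors, axis=0):
    """Concatenate an arbitrarily long list by repeatedly concatenating at
    most CHUNK tensors at a time, so no single call exceeds the limit."""
    while len(tensors) > 1:
        tensors = [np.concatenate(tensors[i:i + CHUNK], axis=axis)
                   for i in range(0, len(tensors), CHUNK)]
    return tensors[0]

# 400 inputs would fail in one Concat call but succeed chunked.
parts = [np.ones((1, 2)) for _ in range(400)]
out = concat_in_chunks(parts)
```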
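The clip_by_value recommendation maps directly onto numpy's clip; a minimal sketch of zeroing negatives (in MindSpore's ops.clip_by_value you would pass explicit lower and upper bounds, with 0 as the lower bound):

```python
import numpy as np

x = np.array([-2.0, 0.5, -1.0, 3.0])

# Replace every negative value with 0, the same effect the FAQ recommends
# achieving with ops.clip_by_value inside a static-graph construct function.
clipped = np.clip(x, 0.0, None)
```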
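The post-processing pattern above, collecting numpy batches outside the network and joining them, might look like the following sketch; the iterator is simulated here with plain dicts, standing in for what create_dict_iterator(output_numpy=True) would yield, and the key "logits" is an assumed name.

```python
import numpy as np

# Stand-in for dataset.create_dict_iterator(output_numpy=True), which in
# MindSpore yields one dict of numpy arrays per batch.
batches = ({"logits": np.full((2, 3), float(i))} for i in range(4))

# Outside the construct function, plain numpy replaces ops.concat.
all_logits = np.concatenate([b["logits"] for b in batches], axis=0)
```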