# Changelog

All notable changes to `complextorch` are documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

## 2.0.0

### Added
- New top-level subpackages: `complextorch.signal` (`pwelch`), `complextorch.transforms` (torchcvnn-style dataloader transforms — `LogAmplitude`, `FFT2`, `IFFT2`, `FFTResize`, `PolSAR`, `Normalize`, `RandomPhase`, …), `complextorch.datasets` (SAR / MRI dataset surface; `SAMPLE` and `SLCDataset` are full implementations, the SAR/MRI-specific readers are present as importable stubs with upstream pointers), and `complextorch.models` (Vision Transformer with `vit_t/s/b/l/h` presets).
- `complextorch.nn.init`: `kaiming_normal_`, `kaiming_uniform_`, `xavier_normal_`, `xavier_uniform_`, `trabelsi_standard_`, `trabelsi_independent_` — variance-correct complex weight initialisers. (PyTorch's built-ins treat real and imaginary parts independently, which is wrong for complex magnitude.)
- `complextorch.nn.relevance` (complex Variational Dropout & Automatic Relevance Determination) and `complextorch.nn.masked` (fixed-mask sparsified layers) subsystems for learned-sparsity workflows. Adds `LinearVD`, `LinearARD`, `BilinearVD`/`ARD`, `Conv{1,2,3}dVD`/`ARD`, `LinearMasked`/`Conv*dMasked`, plus the deploy/extract helpers `named_penalties`, `compute_ard_masks`, `deploy_masks`. Requires `scipy` (new runtime dependency).
- RNN family: `GRUCell`, `GRU`, `LSTMCell`, `LSTM` (cell-based, with optional `batchnorm=True` for stable deep stacks).
- Transformer family: `TransformerEncoderLayer`, `TransformerEncoder`, `TransformerDecoderLayer`, `TransformerDecoder`, `Transformer`.
- Normalisation: `RMSNorm`, `GroupNorm`, `NaiveBatchNorm{1,2,3}d` (split-form baseline). The functional whitening helpers (`whiten2x2_batch_norm`, `whiten2x2_layer_norm`, `inv_sqrtm2x2`, `batch_norm`, `layer_norm`) are now public in `complextorch.nn.functional`.
- Pooling: `MagMaxPool{1,2,3}d` (magnitude-argmax, the canonical complex max-pool — `torch.nn.MaxPool*d` doesn't define `>` on complex), `AvgPool{1,2,3}d`.
- Channel dropout: `Dropout1d`, `Dropout2d`, `Dropout3d` with a shared real/imag mask (Trabelsi 2018).
- Upsampling: `Upsample` (split real/imag) and `PolarUpsample` (phase-preserving polar form).
- Activations: `CELU`, `CCELU`, `CGELU` (split-type-A ELU/CELU/GELU + `CVSplit*` aliases), `zAbsReLU`, `zLeakyReLU` (first-quadrant + leaky variants), `Mod` (magnitude as a module), `AdaptiveModReLU` (per-channel learnable threshold). The existing `modReLU` gains a `learnable=True` flag for a scalar trainable threshold.
- Layers: `Bilinear` (with `conjugate=True/False`), `InterleavedToComplex` / `ComplexToInterleaved` / `ConcatenatedToComplex` / `ComplexToConcatenated` / `RealToComplex` (layout-conversion modules), `PhaseShift` (learnable per-channel phase rotation).
- Loss: `MSELoss` matching `torch.nn.MSELoss` exactly (no 1/2 factor — distinct from `CVQuadError`).
- Optional dependencies gated behind extras: `complextorch[datasets]` pulls in `h5py`; `complextorch[datasets-alos]` pulls in `rasterio`.
- Comprehensive test suite under `tests/`, mirroring the `complextorch/` tree 1:1 (~490 tests). Covers every public class and helper, including Fast/Slow numerical equivalence (state-dict-aligned weights), the full loss reduction matrix plus invalid-reduction checks, Hypothesis-driven round-trip invariants (polar, casting, FFT), `scipy.special.expi` parity and `gradcheck` for `_expi`, and a parameterized sweep over the 11 dataset stubs. The `[test]` extras now pull in `pytest-xdist` (parallel runs via `-n auto`) and `hypothesis` (property tests).
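The variance criterion behind the new initialisers can be checked numerically. Below is a minimal numpy sketch of a Trabelsi-style draw (Rayleigh magnitude, uniform phase) targeting the complex He criterion `Var[w] = E[|w|^2] = 2 / fan_in`; the function name is hypothetical and this is illustrative arithmetic, not the library's `complextorch.nn.init` code:

```python
import numpy as np

def complex_kaiming_draw(fan_in, n, rng):
    """Sample complex weights with Rayleigh magnitude and uniform phase.

    Rayleigh(sigma) has E[r^2] = 2*sigma^2, so choosing
    sigma = 1/sqrt(fan_in) yields the complex He criterion
    Var[w] = E[|w|^2] = 2/fan_in.
    """
    sigma = 1.0 / np.sqrt(fan_in)
    mag = rng.rayleigh(scale=sigma, size=n)
    phase = rng.uniform(-np.pi, np.pi, size=n)
    return mag * np.exp(1j * phase)

rng = np.random.default_rng(0)
fan_in = 256
w = complex_kaiming_draw(fan_in, 200_000, rng)

# Zero-mean, and the *complex* variance hits 2/fan_in,
# which real/imag-independent real-valued init does not guarantee.
print(abs(w.mean()))                # close to 0
print(w.real.var() + w.imag.var())  # close to 2/fan_in = 0.0078125
```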
### Changed

- **BREAKING:** `MultiheadAttention` / `ScaledDotProductAttention` now use the Hermitian inner product `QKᴴ` (was `QKᵀ` — a math bug). A new `softmax_on='complex'|'real'` flag selects the attention-weight semantics; the default `'complex'` keeps the existing `CVSoftMax` behaviour.
- **BREAKING:** `Linear` / `SlowLinear` / fast `Conv{1,2,3}d` / fast `ConvTranspose{1,2,3}d` default to `bias=True` to match `torch.nn`. Pass `bias=False` explicitly if you relied on the old default.
- CI enforces `--cov-fail-under=100` on Python 3.10 / 3.11 / 3.12 — any PR that drops line coverage fails automatically. The coverage config (omit list, `exclude_lines` for `raise NotImplementedError` / `pragma: no cover` / `if TYPE_CHECKING:` / `@overload`) lives in `pyproject.toml`.
- Documentation migrated to the PyData Sphinx Theme + MyST + sphinx-autoapi. The API reference is now auto-generated from docstrings; per-module `.rst` stubs no longer need to be maintained by hand. `docs/` now ships an executable Getting Started notebook (myst-nb) which re-runs on every build, so the public-API examples cannot rot.
- Intersphinx links to PyTorch / NumPy / SciPy so `` :class:`torch.nn.*` `` references resolve.
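The reason `QKᴴ` (rather than `QKᵀ`) is the right similarity for complex queries and keys is visible in a few lines: under the Hermitian product, a vector's similarity with itself is its real, non-negative squared norm, while the plain transpose yields an arbitrary complex number. A numpy sketch of the underlying arithmetic (illustrative, not the library's attention code):

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.standard_normal(8) + 1j * rng.standard_normal(8)

plain = q @ q             # QK^T-style: sum of q_i^2, arbitrary complex
hermitian = q @ q.conj()  # QK^H-style: sum of |q_i|^2, real and >= 0

print(plain)      # generally has a nonzero imaginary part
print(hermitian)  # equals |q|^2 as a complex number with zero imag
```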
### Fixed

- `PerpLossSSIM.forward` was passing the complex `(x, y)` pair to the real-only SSIM conv, raising a `RuntimeError` on first use. It now passes the precomputed magnitudes (matching the cited perpendicular-loss reference).
- Removed dead branches surfaced by the coverage push: an unreachable `elif mask_in_missing:` arm in `BaseMasked._load_from_state_dict` (PyTorch's `load_state_dict` hard-codes `strict=True` when calling `_load_from_state_dict`, so the precondition is never met), an `if weight.is_complex():` check in `MaskedWeightMixin.sparsity` whose two branches returned identical values, the real-input fallbacks in `transforms._resize_spectrum` (only called with complex spectra from `FFTResize`), and the unused `_maybe_bn` helper in `rnn.py`.
## 1.2.0

### Removed

- The legacy `CVTensor` API and its supporting helpers (`cat`, `roll`, `from_polar`, `randn`, and the `torch.Tensor.rect` / `torch.Tensor.polar` monkey-patch) have been removed. The package now operates exclusively on complex-dtype `torch.Tensor` (typically `torch.cfloat`). Use `torch.polar(abs, angle)` and `torch.randn(..., dtype=torch.cfloat)` directly.
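For code migrating off the polar helpers: `torch.polar(abs, angle)` computes `abs * exp(i * angle)`, and the identity round-trips through magnitude and phase. A numpy sketch of that identity, kept torch-free so it stands alone:

```python
import numpy as np

r = np.array([1.0, 2.0, 0.5])
theta = np.array([0.0, np.pi / 2, -np.pi / 4])

# torch.polar(abs, angle) computes abs * exp(i * angle); same identity here.
z = r * np.exp(1j * theta)

assert np.allclose(np.abs(z), r)        # magnitude round-trips
assert np.allclose(np.angle(z), theta)  # phase round-trips
```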
### Fixed

- Correctness in `SlowLinear` / `SlowConv*` / `SlowConvTranspose*` — the Gauss-trick bias was previously off by `b_i * (1 + j)` when `bias=True`. `SlowConv*` and `SlowConvTranspose*` now correctly forward `dilation` and `output_padding`. The fast (native-cfloat) wrappers were unaffected.
- Complex-valued `BatchNorm*` eval-mode no longer broadcasts `running_mean` against the wrong axes.
- `PhaseSigmoid` is now implemented (it was previously an empty class).
- `MagMinMaxNorm` now correctly preserves phase (previously it subtracted a real scalar from a complex tensor).
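For context on the bias fix: the Gauss trick computes a complex product with three real multiplies instead of four, and the complex bias must then be added exactly once to the recombined result. A numpy sketch of the arithmetic (illustrative only, not the library's `Slow*` implementation; the function name is hypothetical):

```python
import numpy as np

def gauss_complex_mul(x, w, bias=None):
    """(a+bi)(c+di) with three real multiplies (Gauss trick):
    t1 = c*(a+b), t2 = a*(d-c), t3 = b*(c+d)
    real part = t1 - t3, imag part = t1 + t2.
    """
    a, b = x.real, x.imag
    c, d = w.real, w.imag
    t1 = c * (a + b)
    t2 = a * (d - c)
    t3 = b * (c + d)
    out = (t1 - t3) + 1j * (t1 + t2)
    if bias is not None:
        out = out + bias  # added once, after recombination
    return out

rng = np.random.default_rng(2)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)
b = rng.standard_normal(16) + 1j * rng.standard_normal(16)

# Matches the direct complex product with bias.
assert np.allclose(gauss_complex_mul(x, w, b), x * w + b)
```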
### Added

- Fast `ConvTranspose1d` / `ConvTranspose2d` / `ConvTranspose3d` are now exported from `complextorch.nn`. Their `output_padding` default matches PyTorch's (`0`).
- Complex-valued losses (`CVQuadError`, `CVFourthPowError`, `CVCauchyError`, `CVLogCoshError`, `CVLogError`) now accept a `reduction` argument (`'mean'` | `'sum'` | `'none'`), defaulting to `'mean'`.
- `complextorch.nn.Conv1d` (and its 2-D / 3-D / transposed siblings) wraps `torch.nn.Conv1d` with `dtype=torch.cfloat` for maximum efficiency. The hand-rolled real/imag-split convolutions remain available under the `Slow` prefix.
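The `reduction` argument follows the usual `torch.nn` contract. A minimal numpy sketch of that contract (the `reduce_loss` helper is hypothetical, shown here on a per-element quadratic error):

```python
import numpy as np

def reduce_loss(per_element, reduction="mean"):
    """Apply torch.nn-style reduction: 'mean' | 'sum' | 'none'."""
    if reduction == "mean":
        return per_element.mean()
    if reduction == "sum":
        return per_element.sum()
    if reduction == "none":
        return per_element
    raise ValueError(f"invalid reduction: {reduction!r}")

# A per-element quadratic complex error |y_hat - y|^2.
y_hat = np.array([1 + 1j, 2 - 1j])
y = np.array([1 + 0j, 1 - 1j])
err = np.abs(y_hat - y) ** 2  # [1.0, 1.0]

print(reduce_loss(err, "mean"))  # 1.0
print(reduce_loss(err, "sum"))   # 2.0
```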