Releases: USTC-TNS/TNSP
v0.3.3
Added
- tetragono: Add mirror direct sampling, which is only used for the Gibbs state on a square lattice and maintains the symmetry of the Gibbs state.
- TAT.py: Add `dtype` and `btype` class members for the `Tensor` object, which makes it easier to communicate with numpy.
- scalapack.py: Add a python wrapper for ScaLAPACK.
Changed
- tetragono: Use `PyScalapack` to speed up the min-SR method. Users need to specify the path of the ScaLAPACK dynamic link libraries via the parameter `scalapack_libraries` for `gm_run` when `natural_gradient_by_direct_pseudo_inverse` is enabled.
- TAT.py: Change the module alias name convention; `float` and `complex` without a byte size specified are now considered double precision.
Fixed
- tetragono: Fix a bug when trying to save a file with a directory name. The previous program only allowed saving files into the current directory.
v0.3.2
Added
- tetragono: Add `natural_gradient_by_direct_pseudo_inverse` to calculate the natural gradient for the sampling lattice, and add the parameters `use_natural_gradient_by_direct_pseudo_inverse` (default is False), `natural_gradient_r_pinv` and `natural_gradient_a_pinv` to the high/mid-level API in `gm_run` to use a direct pseudo-inverse to calculate the natural gradient.
- tetraku: Add the Hamiltonian to the other part of the density matrix for the density matrices of the Heisenberg/Hubbard/tJ models. This ensures the resulting density matrix is unitary despite errors introduced by contraction and approximation. It is controlled by a new parameter `side`, which is either `1` or `2`; the default is `1`, which behaves the same as before.
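The direct pseudo-inverse approach above can be illustrated with a minimal numpy sketch. This is not tetragono's code; the `r_pinv`/`a_pinv` arguments are hypothetical names assumed here to act as relative and absolute singular-value cutoffs, by analogy with the `natural_gradient_r_pinv` and `natural_gradient_a_pinv` parameters:

```python
import numpy as np

def natural_gradient_pinv(S, gradient, r_pinv=1e-12, a_pinv=1e-12):
    """Natural gradient via a direct pseudo-inverse of the metric S.

    r_pinv / a_pinv are assumed to act as relative / absolute
    singular-value cutoffs (a sketch, not tetragono's implementation).
    """
    u, s, vh = np.linalg.svd(S, hermitian=True)
    cutoff = max(r_pinv * s.max(), a_pinv)
    s_inv = np.zeros_like(s)
    mask = s > cutoff
    s_inv[mask] = 1.0 / s[mask]
    # S^+ g, applied via the SVD factors
    return vh.conj().T @ (s_inv * (u.conj().T @ gradient))

# A singular 2x2 metric: the null direction is projected out of the gradient.
S = np.array([[2.0, 0.0], [0.0, 0.0]])
g = np.array([1.0, 1.0])
print(natural_gradient_pinv(S, g))  # [0.5, 0.0]
```

Unlike the conjugate-gradient route, this computes the pseudo-inverse in one shot, which is where a distributed solver such as ScaLAPACK can help for large metrics.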
Changed
- TAT.py: `sqrt` now calculates the square root of the absolute value of each tensor element, instead of the square root of the value itself, which returned `nan` for negative numbers.
- tetraux: Move `Configuration` for the ansatz product state from `TAT.py` to an individual package named `tetraux`, since it is not related to the tensor itself.
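The `sqrt` change above can be demonstrated with numpy (shown here instead of TAT itself, purely to contrast the two elementwise semantics):

```python
import numpy as np

x = np.array([4.0, -9.0, 0.0])

# Old behavior: plain elementwise square root; negative entries give nan.
with np.errstate(invalid="ignore"):
    old = np.sqrt(x)        # [2., nan, 0.]

# New behavior as described above: square root of the absolute value.
new = np.sqrt(np.abs(x))    # [2., 3., 0.]
```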
Deprecated
- tetragono: `natural_gradient` for the observer object is deprecated; users should specify the method to calculate the natural gradient explicitly, using either `natural_gradient_by_direct_pseudo_inverse` or `natural_gradient_by_conjugate_gradient`.
v0.3.1
Added
- TAT.py: Add bindings for the functions of Edge introduced in v0.3.0, such as `point_by_index`.
Changed
- TAT.py: Update the function argument names to keep them the same as those on the C++ side.
- TAT.py: Remove the navigator of TAT.py for getting the tensor type directly; please use module aliases instead. For example, previous code such as `TAT("No", np.float64)` should be updated to `TAT.Normal.float64.Tensor`.
Removed
- TAT.py: Remove the optional FastName binding, which was in fact useless on the python side.
v0.3.0
Added
- tetragono: Tetragono will print a backtrace of the current process when receiving SIGUSR1.
- tetragono: Add squash support for the sampling lattice.
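Printing a backtrace on SIGUSR1 is a common way to inspect a long-running MPI job without stopping it. A minimal sketch of the technique in python (not tetragono's actual code) using the standard `faulthandler` module:

```python
import faulthandler
import os
import signal

# Dump a traceback of all threads to stderr whenever SIGUSR1 arrives,
# without interrupting the running process.
faulthandler.register(signal.SIGUSR1)

if __name__ == "__main__":
    print(f"run `kill -USR1 {os.getpid()}` to print a backtrace")
    # ... the long-running computation would go here ...
```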
Changed
- TAT: Use a multidimensional span to record blocks in a tensor, instead of the previous map data structure; some related APIs are also updated. The detailed updates follow:
  - About data
    - Tensor blocks are stored in a new order. The previous version used a map from the symmetry list to the data block, which followed the lexicographical order of the symmetry list. The new order follows the lexicographical order of the symmetry position list for a data block. Inside the data structure, the blocks are stored in a simple, raw-tensor-like structure called `multidimension_span`.
    - Because of the block order update, filling a tensor with random numbers will return a different tensor than the previous version, even with the same random seed.
    - The edge is now assumed stable; that is to say, the edge will not lose any segment during operations. In the previous version, an edge segment would be erased if no block in the tensor used that segment.
  - About edge API
    - The type `edge_segment_t` is renamed to `edge_segments_t`, because it really holds several segments, not only one segment.
    - Some old functions were renamed, such as `get_point_from_index` to `point_by_index`. The old names are deprecated and will be removed later.
    - Drop the support for reordering segments.
    - Use `edge.segment()` to obtain the real segments for an edge, instead of the original way of accessing the member `edge.segment` directly.
  - About tensor API
    - Some old functions were renamed, such as `get_rank_from_name` to `rank_by_name`. The old names are deprecated and will be removed later.
    - Use `tensor.names()` to obtain the tensor edge names, instead of the original way of accessing the member `tensor.names` directly.
    - Because the edge is stable now, scalar operations on two tensors with missing segments and blocks are no longer allowed.
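The block-order change above amounts to indexing blocks by their position tuple in row-major (lexicographical) order, the way a plain multidimensional array is laid out. A small illustration of that ordering (an assumption about the layout, not TAT's actual code):

```python
import itertools

def flat_index(position, dimensions):
    """Row-major (lexicographic) flat index of a multi-dimensional
    position, as a multidimension-span-like container would store it."""
    index = 0
    for p, d in zip(position, dimensions):
        index = index * d + p
    return index

# Blocks of a rank-2 tensor whose two edges have 2 and 3 segments:
dims = (2, 3)
positions = sorted(itertools.product(range(2), range(3)))  # lexicographic order
# Each position maps to consecutive storage slots 0..5 in that same order.
assert [flat_index(p, dims) for p in positions] == list(range(6))
```

This is why random tensors differ from earlier versions under the same seed: the blocks are filled in a different sequence.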
- tetragono: Update the line search strategy: remove `line_search_error_threshold` and add `line_search_parameter` in `ap_run` and `gm_run`. `line_search_parameter` multiplied by the `step_size` obtained from the line search will be the real step size used to update the state.
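The new step-size rule is a simple scaling; a one-line sketch of the arithmetic (illustrative only):

```python
def effective_step_size(step_size_from_line_search, line_search_parameter):
    # The real update step is the line-search result scaled by the parameter.
    return line_search_parameter * step_size_from_line_search

# e.g. a line search returning 0.1 with line_search_parameter = 0.5
print(effective_step_size(0.1, 0.5))  # 0.05
```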
Deprecated
Removed
- tetragono: `gm_data_load` is removed; please use `gm_hamiltonian` to replace the hamiltonian instead.
- wrapper: `wrapper_TAT` is removed.
Fixed
- tetragono: Fix the wrong error message shown when trying to import the modules used by `ex_create`, `ap_ansatz_mul` and so on.
v0.2.23
Added
- tetragono: Add `ap_hamiltonian` to replace the hamiltonian of the ansatz product state in the tetragono shell.
- tetragono: Add `multichain_number` for `ap_run`, which will run multiple chains inside the same MPI process.
- wrapper: Add a python package `wrapper_TAT` that provides a wrapper over torch with an interface similar to `TAT.py`.
- tetragono: Add an `observe_max_batch_size` option for `ap_run`, which sets an upper limit on the batch size when calculating wss.
Deprecated
- tetragono: `gm_data_load` is deprecated and will be removed in the future; please use `gm_hamiltonian` to replace the hamiltonian instead.
Removed
- tetragono: The `save_state_interval` option for `gm_run` and `ap_run` is removed. The state will be saved at every step.
v0.2.22
Added
- tetragono: Add a `save_configuration_file` option for `gm_run` and `ap_run` in the tetragono shell, which saves the sampling configurations during gradient descent.
- tetragono: Add a list interface for `rename_io` in `tetragono.common_tensor.tensor_toolkit`. An original argument such as `{0: a, 1: b, 2: c}` can now be written as `[a, b, c]`.
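The list form is just shorthand for a dict keyed by position; a sketch of how such an argument could be normalized (a hypothetical helper, not tetragono's actual implementation):

```python
def normalize_rename_argument(arg):
    """Accept either the original dict form {0: a, 1: b, 2: c}
    or the new list form [a, b, c], returning the dict form."""
    if isinstance(arg, dict):
        return arg
    # A list maps each element to its index: [a, b, c] -> {0: a, 1: b, 2: c}
    return dict(enumerate(arg))

print(normalize_rename_argument(["a", "b", "c"]))  # {0: 'a', 1: 'b', 2: 'c'}
```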
Deprecated
- tetragono: The `save_state_interval` option for `gm_run` and `ap_run` is deprecated. In the future, the state will be saved at every step if `save_state_file` is set.
Removed
- tetragono: The original function name `create` for creating a lattice, which was deprecated in v0.2.18, is removed. The new function name to create a lattice is `abstract_lattice`.
- tetragono: `_owner` of the Configuration for the sampling lattice and the ansatz product state is removed; use `owner` instead.
Fixed
- TAT.hpp: Fix an internal compiler error in some old compilers, caused by the feature of fusing edges during tracing.
v0.2.20
Added
- tetraku: Add the models data and ansatzes data into an individual package named `tetraku`.
- tetragono: Configuration now uses `owner` to get the owner sampling lattice object of the configuration object, instead of the previous `_owner`.
- TAT.hpp: Add support for fusing edges when tracing a tensor with `trace`, to keep consistency with the function `contract`.
- TAT.py: Add the fusing-edges argument binding for the tensor function `trace`.
Changed
- tetragono: Rename "multiple product state" to "ansatz product state", to avoid the ambiguous abbreviation. Rename all `mp_xxx` to `ap_xxx` in the tetragono shell.
- TAT.hpp: Two new internal names used by user-customized name types are added: `Trace_4` and `Trace_5`. For the simple internal name usage, two new default internal names are added: `Default_3` and `Default_5`.
Deprecated
- tetragono: `_owner` of the Configuration for the sampling lattice is deprecated; use `owner` instead.
Fixed
- TAT.hpp: Fix a bug on the windows platform when copying an edge with fermi symmetry.
v0.2.19
Added
- tetragono: Add a new command `gm_hamiltonian` to replace the Hamiltonian of an existing sampling lattice.
- tetragono: Add a `conjugate_gradient_method_error` option for `gm_run` and `mp_run` in the tetragono shell. The conjugate gradient will stop when `conjugate_gradient_method_step` is reached OR `conjugate_gradient_method_error` is reached. Set `conjugate_gradient_method_error` to `0.0` to skip the error check, or set `conjugate_gradient_method_step` to `-1` to skip the step check.
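The dual stopping rule described above can be sketched as a plain conjugate-gradient solver where each criterion has a sentinel value that disables it (a sketch under those assumptions, not tetragono's code):

```python
import numpy as np

def conjugate_gradient(A, b, max_step=-1, error=0.0):
    """Solve A x = b (A symmetric positive definite) by conjugate gradient.

    Stops when `max_step` iterations are reached OR the residual norm
    drops below `error`. max_step = -1 skips the step check and
    error = 0.0 skips the error check, mirroring the options above.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    step = 0
    while True:
        if max_step != -1 and step >= max_step:
            break
        if error != 0.0 and np.linalg.norm(r) < error:
            break
        if step >= 100 * len(b):  # hard safety cap for this sketch
            break
        Ap = A @ p
        rr = r @ r
        if rr == 0.0:  # exact solution reached
            break
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        p = r + ((r @ r) / rr) * p
        step += 1
    return x
```

With `max_step=-1` and a nonzero `error`, only the residual check is active, and vice versa, matching the OR semantics in the release note.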
Changed
- lazy: Use a manual stack to run the recursion now, to avoid the recursion depth limit.
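Replacing implicit recursion with an explicit stack is a standard way to escape python's recursion depth limit; a minimal illustration of the idea (not lazy's actual code):

```python
import sys

# Build a linked chain far deeper than the default recursion limit.
deep = None
for i in range(10 * sys.getrecursionlimit()):
    deep = (i, deep)

def length_recursive(node):
    # Naive recursion: raises RecursionError on deep inputs.
    return 0 if node is None else 1 + length_recursive(node[1])

def length_manual_stack(node):
    # Manual stack: same traversal, but bounded only by memory.
    stack = []
    while node is not None:
        stack.append(node)
        node = node[1]
    return len(stack)

print(length_manual_stack(deep))  # works; length_recursive(deep) would not
```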
Fixed
- tetragono: Fix a problem when calling `gm_data_load` in the tetragono shell.
- tetragono: Fix a bug in calculating the natural gradient of a complex tensor network state.
- tetragono: Fix a bug in calculating the expectation and the deviation in ergodic sampling with a subspace restriction.
v0.2.18
TAT 0.2.18
v0.2.17
TAT 0.2.17