Package: markovDP
Title: Infrastructure for Discrete-Time Markov Decision Processes (MDP)
Version: 0.99.0
Date: 2024-08-29
Authors@R:
    person("Michael", "Hahsler", , "mhahsler@lyle.smu.edu", role = c("aut", "cph", "cre"),
           comment = c(ORCID = "0000-0003-2716-1405"))
Description: Provides the infrastructure to work with Markov Decision Processes
    (MDPs) in R. The focus is on convenience in formulating MDPs, support for
    sparse representations (using sparse matrices, lists, and data.frames),
    and visualization of results. Some key components are implemented in
    C++ to speed up computation, and several popular solvers are included.
License: GPL (>=3)
URL: https://github.com/mhahsler/markovDP
BugReports: https://github.com/mhahsler/markovDP/issues
Depends:
    R (>= 3.5.0)
Imports:
    fastmap,
    foreach,
    igraph,
    lpSolve,
    Matrix,
    MatrixExtra,
    methods,
    progress,
    Rcpp,
    stats
Suggests:
    doParallel,
    gifski,
    knitr,
    rmarkdown,
    testthat,
    visNetwork
LinkingTo:
    Rcpp
VignetteBuilder:
    knitr
Classification/ACM: G.4, G.1.6, I.2.6
Copyright: Copyright (C) Michael Hahsler.
Encoding: UTF-8
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.3.2
SystemRequirements: C++17
Collate:
    'AAA_check_installed.R'
    'AAA_colors.R'
    'AAA_foreach_helper.R'
    'AAA_nodots.R'
    'AAA_package.R'
    'AAA_progress.R'
    'AAA_sample_sparse.R'
    'AAA_shorten.R'
    'AAA_which_max_random.R'
    'Cliff_walking.R'
    'DynaMaze.R'
    'MDP.R'
    'MDPE.R'
    'Maze.R'
    'Q_values.R'
    'RcppExports.R'
    'Windy_gridworld.R'
    'absorbing_states.R'
    'accessors_transitions.R'
    'accessors_reward.R'
    'accessors.R'
    'act.R'
    'action.R'
    'action_state_helpers.R'
    'available_actions.R'
    'bellman_operator.R'
    'check_and_fix_MDP.R'
    'convergence_horizon.R'
    'find_reachable_states.R'
    'greedy.R'
    'gridworld.R'
    'policy.R'
    'policy_evaluation.R'
    'policy_evaluation_LP.R'
    'regret.R'
    'reward.R'
    'round_stochchastic.R'
    'sample_MDP.R'
    'sample_MDPE.R'
    'solve_MDP.R'
    'solve_MDP_APPROX.R'
    'solve_MDP_DP.R'
    'solve_MDP_LP.R'
    'solve_MDP_MC.R'
    'solve_MDP_SAMP.R'
    'solve_MDP_TD.R'
    'sparse_helpers.R'
    'transition_graph.R'
    'unreachable_states.R'
    'value_function.R'
    'visit_probability.R'
    'zzz.R'
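
The Collate list above suggests a typical workflow: formulate an MDP with a constructor (MDP.R), solve it (solve_MDP.R), and inspect the result (policy.R). The sketch below illustrates that workflow; the function names MDP(), solve_MDP(), and policy(), their arguments, and the method string are assumptions inferred from the file names and may differ from the package's actual interface.

library(markovDP)

# Hypothetical example: a tiny two-state, two-action MDP.
# The argument names (states, actions, transition_prob, reward, discount, name)
# are assumptions about the MDP() constructor, not a confirmed signature.
m <- MDP(
  states  = c("s1", "s2"),
  actions = c("stay", "go"),
  transition_prob = list(
    # one state-by-state transition matrix per action
    stay = rbind(c(1, 0),
                 c(0, 1)),
    go   = rbind(c(0, 1),
                 c(1, 0))
  ),
  reward = list(
    # one state-by-state reward matrix per action
    stay = rbind(c(0, 0),
                 c(0, 0)),
    go   = rbind(c(1, 0),
                 c(0, 1))
  ),
  discount = 0.9,
  name = "Tiny MDP"
)

# Solve with a dynamic programming method and inspect the resulting policy.
# The method name is an assumption; see the package documentation for the
# solvers that are actually available.
sol <- solve_MDP(m, method = "value_iteration")
policy(sol)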