{
"name": "Drip-statistical-learning",
"tagline": "DRIP Statistical Learning",
"body": "\r\n<p align=\"center\"><img src=\"https://github.com/lakshmiDRIP/DRIP/blob/master/DRIP_Logo.gif?raw=true\" width=\"100\"></p>\r\n\r\n**v2.53** *12 November 2016*\r\n\r\nDRIP Statistical Learning is a collection of Java libraries for Statistical Evaluation and Machine Learning.\r\n\r\nDRIP Statistical Learning is composed of the following main libraries:\r\n * Probabilistic Sequence Measure Concentration Bounds Library\r\n * Statistical Learning Theory Framework Library\r\n * Empirical Risk Minimization Library\r\n * VC and Capacity Measure Library\r\n * Covering Numbers Library\r\n * Alternate Statistical Learning Library\r\n * Problem Space and Algorithm Families Library\r\n * Parametric Classification Library\r\n * Non-parametric Classification Library\r\n * Clustering Library\r\n * Ensemble Learning Library\r\n * Multi-linear Sub-space Learning Library\r\n * Real-Valued Sequence Learning Library\r\n * Real-Valued Learning Library\r\n * Sequence Labeling Library\r\n * Bayesian Library\r\n * Linear Algebra Support Library\r\n\r\nFor Installation, Documentation, Samples, and the associated supporting Numerical Libraries, please check out [DRIP](https://github.com/lakshmiDRIP/DRIP).\r\n\r\n\r\n## DRIP Core Technical Specifications\r\n * [Asset Allocation Library](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification/AssetAllocation/AssetAllocation_v2.13.pdf)\r\n * [Fixed Income Analytics](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification/FixedIncome/FixedIncomeAnalytics_v2.47.pdf)\r\n * [Transaction Cost Analytics](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification/TransactionCost/TransactionCostAnalytics_v2.53.pdf)\r\n\r\n\r\n## DRIP Supporting Technical Specifications\r\n * [Spline Builder Library](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification/SplineBuilder/SplineBuilder_v0.82.pdf)\r\n * [Numerical Optimization 
Library](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification/NumericalOptimizer/NumericalOptimization_v2.05.pdf)\r\n * [Statistical Learning Library](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification/StatisticalLearning/StatisticalLearningLibrary_v0.80.pdf)\r\n * [Machine Learning Library](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification/MachineLearning/MachineLearningLibrary_v0.92.pdf)\r\n\r\n\r\n## Additional Documentation\r\n * [DRIP GitHub Source](https://github.com/lakshmiDRIP/DRIP)\r\n * [DRIP API Javadoc](https://lakshmidrip.github.io/DRIP/Javadoc/index.html)\r\n * [DRIP Release Notes](https://github.com/lakshmiDRIP/DRIP/tree/master/ReleaseNotes)\r\n * [DRIP Technical Specifications](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/DRIPSpecification)\r\n * [DRIP External Specifications](https://github.com/lakshmiDRIP/DRIP/tree/master/Docs/External)\r\n * User guide is a work in progress!\r\n\r\n\r\n## Samples (Statistical Learning - Need much, much more!)\r\n * [Efron-Stein Bounds](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/efronstein)\r\n * [Custom Sequence Bounds](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/sequence)\r\n * [Covering Number Bounds](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/coveringnumber)\r\n * [Binary Classifier Bounds](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/classifier)\r\n\r\n\r\n## Samples (Numerical Support - Need much more!)\r\n * [Linear Algebra/Components](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/matrix)\r\n * [Closed Distribution Measure](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/measure)\r\n * [Empirical Distribution Measure](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/statistics)\r\n * 
[Search/Quadrature/Fourier](https://github.com/lakshmiDRIP/DRIP/tree/master/org/drip/sample/numerical)\r\n\r\n## Features\r\n\r\n### Probabilistic Bounds and Concentration of Measure Sequences\r\n#### Probabilistic Bounds\r\n * Tail Probability Bounds Estimation\r\n * Basic Probability Inequalities\r\n * Cauchy-Schwarz Inequality\r\n * Association Inequalities\r\n * Moment, Gaussian, and Exponential Bounds\r\n * Bounding Sums of Independent Random Variables\r\n * Non Moment Based Bounding - Hoeffding Bound\r\n * Moment Based Bounds\r\n * Binomial Tails\r\n * Custom Bounds for Special i.i.d. Sequences\r\n\r\n#### Efron-Stein Bounds\r\n * Martingale Differences Sum Inequality\r\n * Efron-Stein Inequality\r\n * Bounded Differences Inequality\r\n * Bounded Differences Inequality - Applications\r\n * Self-Bounding Functions\r\n * Configuration Functions\r\n\r\n#### Entropy Methods\r\n * Information Theory - Basics\r\n * Tensorization of the Entropy\r\n * Logarithmic Sobolev Inequalities\r\n * Logarithmic Sobolev Inequalities - Applications\r\n * Exponential Inequalities for Self-Bounding Functions\r\n * Combinatorial Entropy\r\n * Variations on the Theme of Self-Bounding Functions\r\n\r\n#### Concentration of Measure\r\n * Equivalent Bounded Differences Inequality\r\n * Convex Distance Inequality\r\n * Convex Distance Inequality - Proof\r\n * Application of the Convex Distance Inequality - Bin Packing\r\n\r\n### Statistical Learning Theory - Foundation and Framework\r\n#### Standard SLT Framework\r\n * Computational Learning Theory\r\n * Probably Approximately Correct (PAC) Learning\r\n * PAC Definitions and Terminology\r\n * SLT Setup\r\n * Algorithms for Reducing Over-fitting\r\n * Bayesian Normalized Regularizer Setup\r\n\r\n#### Generalization and Consistency\r\n * Types of Consistency\r\n * Bias-Variance or Estimation-Approximation Trade-off\r\n * Bias-Variance Decomposition\r\n * Bias-Variance Optimization\r\n * Generalization and Consistency for 
kNN\r\n\r\n### Empirical Risk Minimization - Principles and Techniques\r\n#### Empirical Risk Minimization\r\n * Overview\r\n * The Loss Functions and Empirical Risk Minimization Principles\r\n * Application of the Central Limit Theorem (CLT) and the Law of Large Numbers (LLN)\r\n * Inconsistency of Empirical Risk Minimizers\r\n * Uniform Convergence\r\n * ERM Complexity\r\n\r\n#### Symmetrization\r\n * The Symmetrization Lemma\r\n\r\n#### Generalization Bounds\r\n * The Union Bound\r\n * Shattering Coefficient\r\n * Empirical Risk Generalization Bound\r\n * Large Margin Bounds\r\n\r\n#### Rademacher Complexity\r\n * Rademacher-based Uniform Convergence\r\n * VC Entropy\r\n * Chaining Technique\r\n\r\n#### Local Rademacher Averages\r\n * Star-Hull and Sub-Root Functions\r\n * Local Rademacher Averages and Fixed Point\r\n * Local Rademacher Averages - Consequences\r\n\r\n#### Normalized ERM\r\n * Computing the Normalized Empirical Risk Bounds\r\n * De-normalized Bounds\r\n\r\n#### Noise Conditions\r\n * SLT Analysis Metrics\r\n * Types of Noise Conditions\r\n * Relative Loss Class\r\n\r\n### VC Theory and Capacity Measure Analysis\r\n#### VC Theory and VC Dimension\r\n * Empirical Processes\r\n * Bounding the Empirical Loss Function\r\n * VC Dimension - Setup\r\n * Incorporating the Formal VC Definition\r\n * VC Dimension Examples\r\n * VC Dimension vs. 
Popper's Dimension\r\n\r\n#### Sauer Lemma and VC Classifier Framework\r\n * Working out Sauer Lemma Bounds\r\n * Sauer Lemma ERM Bounds\r\n * VC Index\r\n * VC Classifier Framework\r\n\r\n### Capacity/Complexity Estimation Using Covering Numbers\r\n#### Covering and Entropy Numbers\r\n * Nomenclature - Normed Spaces\r\n * Covering, Entropy, and Dyadic Numbers\r\n * Background and Overview of Basic Results\r\n\r\n#### Covering Numbers for Real-Valued Function Classes\r\n * Functions of Bounded Variation\r\n * Functions of Bounded Variation - Upper Bound\r\n * Functions of Bounded Variation - Lower Bound\r\n * General Function Classes\r\n * General Function Class Bounds\r\n * General Function Class Bounds - Lemmas\r\n * General Function Class - Upper Bounds\r\n * General Function Class - Lower Bounds\r\n\r\n#### Operator Theory Methods for Entropy Numbers\r\n * Generalization Bounds via Uniform Convergence\r\n * Basic Uniform Convergence Bounds\r\n * Loss Function Induced Classes\r\n * Standard Form of Uniform Convergence\r\n\r\n#### Kernel Machines\r\n * SVM Capacity Control\r\n * Nonlinear Kernels\r\n * Generalization Performance of Regularization Networks\r\n * Covering Number Determination Steps\r\n * Challenges Presenting the Master Generalization Error\r\n\r\n#### Entropy Numbers for Kernel Machines\r\n * Mercer Kernels\r\n * Equivalent Kernels\r\n * Mapping Phi into L2\r\n * Corrigenda to the Mercer Conditions\r\n * L2 Unit Ball -> Epsilon Mapping Scaling Operator\r\n * Unit Bounding Operator Entropy Numbers\r\n * The SVM Operator\r\n * Maurey's Theorem\r\n * Bounds for SV Classes\r\n * Asymptotic Rates of Decay for the Entropy Numbers\r\n\r\n#### Discrete Spectra of Convolution Operators\r\n * Kernels with Compact/Non-compact Support\r\n * The Kernel Operator Eigenvalues\r\n * Choosing Nu\r\n * Extensions to d-dimensions\r\n\r\n#### Covering Numbers for Given Decay Rates\r\n * Asymptotic/Non-asymptotic Decay of Covering Numbers\r\n * Polynomial Eigenvalue Decay\r\n * 
Summation and Integration of Non-decreasing Functions\r\n * Exponential Polynomial Decay\r\n\r\n#### Kernels for High-Dimensional Data\r\n * Kernel Fourier Transforms\r\n * Degenerate Kernel Bounds\r\n * Covering Numbers for Degenerate Systems\r\n * Bounds for Kernels in R^d\r\n * Impact of the Fourier Transform Decay on the Entropy Numbers\r\n\r\n#### Regularization Networks Entropy Numbers Determination - Practice\r\n * Custom Applications of the Kernel Machines Entropy Numbers\r\n * Extensions to the Operator-Theoretic Viewpoint for Covering Numbers\r\n\r\n### Alternate Statistical Learning Approaches\r\n#### Minimum Description Length Approach\r\n * Coding Approaches\r\n * MDL Analyses\r\n\r\n#### Bayesian Methods\r\n * Bayesian and Frequentist Approaches\r\n * Bayesian Approaches\r\n\r\n#### Knowledge Based Bounds\r\n * Places to Incorporate Bounds\r\n * Prior Knowledge into the Function Space\r\n\r\n#### Approximation Error and Bayes' Consistency\r\n * Nested Function Spaces\r\n * Regularization\r\n * Achieving Zero Approximation Error\r\n * Rate of Convergence\r\n\r\n#### No Free Lunch Theorem\r\n * Algorithmic Consistency\r\n * NFL Formal Statements\r\n\r\n### Problem Space and Algorithm Families\r\n#### Generative and Discriminative Models\r\n * Generative Models\r\n * Discriminative Models\r\n * Examples of Discriminative Approaches\r\n * Differences between Generative and Discriminative Models\r\n\r\n#### Supervised Learning\r\n * Supervised Learning Practice Steps\r\n * Challenges with Supervised Learning Practice\r\n * Formulation\r\n\r\n#### Unsupervised Learning\r\n\r\n#### Machine Learning\r\n * Calibration vs. Learning\r\n\r\n#### Pattern Recognition\r\n * Supervised vs. 
Unsupervised Pattern Recognition\r\n * Probabilistic Pattern Recognition\r\n * Formulation of Pattern Recognition\r\n * Pattern Recognition Practice SKU\r\n * Pattern Recognition Applications\r\n\r\n### Parametric Classification Algorithms\r\n#### Statistical Classification\r\n#### Linear Discriminant Analysis\r\n * Setup and Formulation\r\n * Fisher's Linear Discriminant\r\n * Quadratic Discriminant Analysis\r\n\r\n#### Logistic Regression\r\n * Formulation\r\n * Goodness of Fit\r\n * Mathematical Setup\r\n * Bayesian Logistic Regression\r\n * Logistic Regression Extensions\r\n * Model Suitability Tests with Cross Validation\r\n\r\n#### Multinomial Logistic Regression\r\n * Setup and Formulation\r\n\r\n### Non-Parametric Classification Algorithms\r\n#### Decision Trees and Decision Lists\r\n#### Variable Bandwidth Kernel Density Estimation\r\n#### k Nearest Neighbors Algorithm\r\n#### Perceptron\r\n#### Support Vector Machines (SVM)\r\n#### Gene Expression Programming (GEP)\r\n\r\n### Clustering Algorithms\r\n#### Cluster Analysis\r\n * Cluster Models\r\n * Connectivity Based Clustering\r\n * Centroid Based Clustering\r\n * Distribution Based Clustering\r\n * Density Based Clustering\r\n * Clustering Enhancements\r\n * Internal Cluster Evaluation\r\n * External Cluster Evaluation\r\n * Clustering Axiom\r\n\r\n#### Mixture Model\r\n * Generic Mixture Model Details\r\n * Specific Mixture Models\r\n * Mixture Model Samples\r\n * Identifiability\r\n * Expectation Maximization\r\n * Alternatives to Expectation Maximization\r\n * Mixture Model Extensions\r\n\r\n#### Deep Learning\r\n * Unsupervised Representation Learner\r\n * Deep Learning using ANN\r\n * Deep Learning Architectures\r\n * Challenges with the DNN Approach\r\n * Deep Belief Networks (DBN)\r\n * Convolutional Neural Networks (CNN)\r\n * Deep Learning Evaluation Data Sets\r\n * Neurological Basis of Deep Learning\r\n\r\n#### Hierarchical Clustering\r\n\r\n#### k-Means Clustering\r\n * Mathematical Formulation\r\n * The 
Standard Algorithm\r\n * k-Means Initialization Schemes\r\n * k-Means Complexity\r\n * k-Means Variations\r\n * k-Means Applications\r\n * Alternate k-Means Formulations\r\n\r\n#### Correlation Clustering\r\n\r\n#### Kernel Principal Component Analysis (Kernel PCA)\r\n\r\n### Ensemble Learning Algorithms\r\n\r\n#### Ensemble Learning\r\n * Overview\r\n * Theoretical Underpinnings\r\n * Ensemble Aggregator Types\r\n * Bayes' Optimal Classifier\r\n * Bagging and Boosting\r\n * Bayesian Model Averaging (BMA)\r\n * Bayesian Model Combination (BMC)\r\n * Bucket of Models (BOM)\r\n * Stacking\r\n * Ensemble Averaging vs. Basis Spline Representation\r\n\r\n#### ANN Ensemble Averaging\r\n * Techniques and Results\r\n\r\n#### Boosting\r\n * Philosophy behind Boosting Algorithms\r\n * Popular Boosting Algorithms and Drawbacks\r\n\r\n#### Bootstrap Averaging\r\n * Sample Generation\r\n * Bagging with 1NN - Theoretical Treatment\r\n\r\n### Multi-linear Sub-space Learning Algorithms\r\n#### Tensors and Multi-linear Sub-space Algorithms\r\n * Tensors\r\n * Multi-linear Sub-space Learning\r\n * Multi-linear PCA\r\n\r\n### Real-Valued Sequence Learning Algorithms\r\n#### Kalman Filtering\r\n * Continuous Time Kalman Filtering\r\n * Nonlinear Kalman Filtering\r\n * Kalman Smoothing\r\n\r\n#### Particle Filtering\r\n\r\n### Real-Valued Learning Algorithms\r\n#### Regression Analysis\r\n * Linear Regression\r\n * Assumptions Underlying Basic Linear Regression\r\n * Multi-variate Regression Analysis\r\n * Multi-variate Predictor/Response Regression\r\n * OLS on Basis Spline Representation\r\n * OLS on Basis Spline Representation with Roughness Penalty\r\n * Linear Regression Estimator Extensions\r\n * Bayesian Approach to Regression Analysis\r\n\r\n#### Component Analysis\r\n * Independent Component Analysis (ICA) Specification\r\n * Independent Component Analysis (ICA) Formulation\r\n * Principal Component Analysis\r\n * Principal Component Analysis - Constrained Formulation\r\n * 2D Principal 
Component Analysis - Constrained Formulation\r\n * 2D Principal Component Analysis - Lagrange Multiplier Based Constrained Formulation\r\n * nD Principal Component Analysis - Lagrange Multiplier Based Constrained Formulation\r\n * Information Theoretic Analysis of PCA\r\n * Empirical PCA Estimation From Data Set\r\n\r\n### Sequence Label Learning Algorithms\r\n#### Hidden Markov Models\r\n * HMM State Transition/Emission Parameter Estimation\r\n * HMM Based Inference\r\n * Non-Bayesian HMM Model Setup\r\n * Bayesian Extension to the HMM Model Setup\r\n * HMM in the Practical World\r\n\r\n#### Markov Chain Models\r\n * Markov Property\r\n * Markov Chains\r\n * Classification of the Markov Models\r\n * Markov Chain Monte Carlo (MCMC)\r\n * MCMC for Multi-dimensional Integrals\r\n\r\n#### Markov Random and Conditional Fields\r\n * MRF/CRF Axiomatic Properties/Definitions\r\n * Clique Factorization\r\n * Inference in MRF/CRF\r\n\r\n#### Maximum Entropy Markov Models\r\n\r\n#### Probabilistic Grammar and Parsing\r\n * Parsing\r\n * Parser\r\n * Context-Free Grammar (CFG)\r\n\r\n### Bayesian Analysis\r\n#### Concepts, Formulation, Usage, and Application\r\n * Applicability\r\n * Analysis of Bayesian Systems\r\n * Bayesian Networks\r\n * Hypothesis Testing\r\n * Bayesian Updating\r\n * Maximum Entropy Techniques\r\n * Priors\r\n * Predictive Posteriors and Priors\r\n * Approximate Bayesian Computation\r\n * Measurement and Parametric Calibration\r\n * Bayesian Regression Analysis\r\n * Extensions to Bayesian Regression Analysis\r\n * Spline Proxying of Bayesian Systems\r\n\r\n### Linear Algebra Support\r\n#### Optimizer\r\n * Constrained Optimization using Lagrangian\r\n * Least Squares Optimizer\r\n * Multi-variate Distribution\r\n\r\n#### Linear Systems Analysis and Transformation\r\n * Matrix Transforms\r\n * System of Linear Equations\r\n * Orthogonalization\r\n * Gaussian Elimination\r\n\r\n\r\n## Contact\r\n\r\nlakshmi@synergicdesign.com\r\n",
"note": "Don't delete this file! It's used internally to help with page regeneration."
}