My name is Kevin Vecmanis. I'm a professional engineer, CFA candidate, machine learning practitioner, and full-stack developer. VanAurum is the convergence of my two passions: finance and machine learning. In addition to the member-based analytics on this site, I offer fee-based consulting services. Have a tough financial analysis problem you think I can help you solve? Need Python development support? Maybe you need an experienced engineer to support your development team? I can help.
I have experience with numerous coding technologies in a production environment. Here are some of the technologies I used to build this site:
Below you'll find a portfolio of Python code samples representing some of my other skills and interests.
By Kevin Vecmanis
In this installment I demonstrate the code and concepts required to build a Markowitz Optimal Portfolio in Python, including the calculation of the capital market line. I build flexible functions that can optimize portfolios for Sharpe ratio, maximum return, and minimal risk.
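To give a flavor of the approach, here is a minimal sketch (not the post's actual code) of maximizing the Sharpe ratio over portfolio weights with `scipy.optimize.minimize`. The returns are synthetic and the asset count, seed, and risk-free rate of zero are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic daily returns for 4 hypothetical assets (illustration only)
np.random.seed(0)
returns = np.random.normal(0.0005, 0.01, size=(1000, 4))
mean_returns = returns.mean(axis=0) * 252      # annualized mean returns
cov_matrix = np.cov(returns.T) * 252           # annualized covariance matrix

def neg_sharpe(weights):
    # Negative Sharpe ratio (risk-free rate assumed zero), since we minimize
    port_return = weights @ mean_returns
    port_vol = np.sqrt(weights @ cov_matrix @ weights)
    return -port_return / port_vol

n = len(mean_returns)
result = minimize(
    neg_sharpe,
    x0=np.full(n, 1.0 / n),                              # start from equal weights
    bounds=[(0.0, 1.0)] * n,                             # long-only
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # fully invested
)
optimal_weights = result.x
print(optimal_weights.round(3), -neg_sharpe(optimal_weights))
```

The same machinery optimizes for maximum return or minimum volatility by swapping the objective function.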
In part 2 of this series, I demonstrate the effect that Bitcoin would have had when introduced to the portfolio optimization algorithm. We do a comparative study of the optimal portfolio including Bitcoin up to the price peak in December 2017, and again for the full price history to present day.
In part 3 of this series, I tie together the concepts from parts 1 and 2 into a unified class structure called 'PortfolioOptimizer'. This class can accept any number of asset tickers and, for any given portfolio size, provide you with the Markowitz optimal portfolio based on Sharpe ratio, return, and volatility.
In part 4 of this series I introduce the problem of portfolio rebalancing. I demonstrate the class 'RebalancingAlgorithm', a comprehensive analysis tool that accepts optimal portfolio parameters, runs a Monte Carlo simulation to test a rebalancing algorithm, and produces a detailed report on how the optimal portfolio should be managed and rebalanced going forward to maximize returns and minimize costs.
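As a rough illustration of the idea (not the 'RebalancingAlgorithm' class itself), the sketch below Monte Carlo simulates a two-asset portfolio with threshold rebalancing. The target weights, drift threshold, cost rate, and return parameters are all hypothetical.

```python
import numpy as np

np.random.seed(1)
target = np.array([0.6, 0.4])   # hypothetical target weights
threshold = 0.05                # rebalance when any weight drifts more than 5%
cost_rate = 0.001               # assumed proportional trading cost per rebalance

def simulate(n_days=252):
    # Dollar value per sleeve, starting at $1 total
    value = target.copy()
    rebalances = 0
    for _ in range(n_days):
        daily = np.random.normal([0.0004, 0.0002], [0.012, 0.005])
        value = value * (1 + daily)
        weights = value / value.sum()
        if np.abs(weights - target).max() > threshold:
            # Simplified: charge the cost on total value, then reset to target
            total = value.sum() * (1 - cost_rate)
            value = target * total
            rebalances += 1
    return value.sum(), rebalances

final_values = [simulate()[0] for _ in range(200)]
print(round(float(np.mean(final_values)), 4))
```

Sweeping `threshold` and `cost_rate` across the simulated paths is what lets you trade off tracking error against trading costs.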
In this post I demonstrate the code required to fetch data from a data provider, build trade conditions, and back-test trading strategies that account for trading costs, slippage, the availability of fractional units, and other parameters.
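A stripped-down sketch of the kind of cost-aware backtest the post builds, here using synthetic prices and a simple moving-average crossover. The signal rule, window lengths, and the flat per-trade cost standing in for commissions and slippage are all illustrative assumptions.

```python
import numpy as np

# Synthetic price series (stand-in for data fetched from a provider)
np.random.seed(2)
prices = 100 * np.cumprod(1 + np.random.normal(0.0003, 0.01, 500))

# Fast/slow moving averages; long when the fast MA is above the slow MA
fast = np.convolve(prices, np.ones(10) / 10, mode="valid")
slow = np.convolve(prices, np.ones(50) / 50, mode="valid")
signal = (fast[-len(slow):] > slow).astype(int)

rets = np.diff(prices[-len(slow):]) / prices[-len(slow):-1]
cost = 0.001                          # assumed round-trip cost plus slippage
trades = np.abs(np.diff(signal))      # 1 on the days the position flips
strategy = signal[:-1] * rets - trades * cost
equity = np.cumprod(1 + strategy)
print(round(float(equity[-1]), 3))
```

Real strategies layer on the extra parameters the post covers, such as fractional-unit availability and position sizing.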
In this post I show how to build an all-in-one algorithm using Hyperopt that automates data preprocessing, feature selection, and hyperparameter tuning for the Numerai dataset. This is a definitive guide that walks through the tournament structure and presents a scalable algorithm that will allow you to build top-tier models out of the gate.
Most robo-advisors offer the exceptional benefit of low (or zero) trading costs and fractional share ownership. In this post I perform statistical testing of two portfolios: a portfolio with robo-advisor-like benefits, and a self-directed portfolio that incurs trading costs and can only purchase whole units. This is done using the two Python classes that we built up in our series on Algorithmic Portfolio Management.
In part 1 of this series I introduce the Hyperopt library and show how to use it with sklearn. Hyperopt is a remarkably powerful optimization library that can tune machine learning algorithms faster than grid search and randomized search methods.
In part 2 of this post we extend the concepts further and demonstrate how Hyperopt can be applied to simultaneously optimize multiple machine learning algorithms working towards the same learning problem.
In part 1 of this series I introduce the math behind the Black-Scholes-Merton (BSM) pricing model and how it can be encoded in Python. We then run valuation examples using real-world option pricing data for GDX.
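The closed-form call price can be encoded in a few lines, as in this sketch. The inputs below are round hypothetical numbers, not the GDX quotes used in the post.

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call (no dividends).

    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Hypothetical at-the-money example
price = bsm_call(S=100, K=100, T=1.0, r=0.02, sigma=0.25)
print(round(price, 2))
```

With real market prices, the same function can be inverted numerically to back out implied volatility.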
In part 2 of this series we demonstrate how to calculate and analyze option "greeks" in Python. We then do an analysis demonstrating the effect that changes in various parameters have on option prices.
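As one small example of the greeks analysis, the sketch below computes a call's delta from the closed-form BSM expression and cross-checks it with a central finite difference of the price. The parameter values are hypothetical.

```python
from math import log, sqrt, exp
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    """BSM price of a European call (no dividends)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def call_delta(S, K, T, r, sigma):
    """Closed-form delta: sensitivity of the call price to the spot price."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm.cdf(d1)

# Cross-check the analytic delta against a central finite difference
S, K, T, r, sigma = 100, 100, 1.0, 0.02, 0.25
h = 0.01
fd_delta = (bsm_call(S + h, K, T, r, sigma) - bsm_call(S - h, K, T, r, sigma)) / (2 * h)
print(round(call_delta(S, K, T, r, sigma), 4), round(fd_delta, 4))
```

Bumping the other inputs the same way recovers vega, theta, and rho numerically, which is handy for sanity-checking the closed forms.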
In this post I show how to build and interact with a SQL database in Python.
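In miniature, the workflow looks like this sketch using the standard-library `sqlite3` module with an in-memory database; the table name and sample rows are made up for illustration.

```python
import sqlite3

# In-memory database so the example leaves nothing on disk
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE prices (ticker TEXT, close REAL)")
cur.executemany(
    "INSERT INTO prices VALUES (?, ?)",
    [("GLD", 121.3), ("GDX", 22.8)],
)
conn.commit()
rows = cur.execute("SELECT ticker, close FROM prices ORDER BY ticker").fetchall()
print(rows)  # → [('GDX', 22.8), ('GLD', 121.3)]
conn.close()
```

Swapping `":memory:"` for a filename gives you a persistent database with no other code changes.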
I'm a visual learner, and at times machine learning concepts can be difficult to grasp intuitively. I find that visualizing the output of algorithms helps me develop a better understanding of how they work. In this post I demonstrate, visually, the effect that each XGBoost parameter has on the decision boundaries in the model.
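The visualization trick itself is simple: evaluate the fitted model over a dense grid and contour the predictions. The sketch below shows the grid step using sklearn's `GradientBoostingClassifier` as a lightweight stand-in for XGBoost, on a synthetic two-moons dataset.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost here

# Synthetic 2-D dataset so the decision boundary can be drawn directly
X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
model = GradientBoostingClassifier(max_depth=3, random_state=0).fit(X, y)

# Evaluate the model on a 200x200 grid covering the data, with a margin
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200),
)
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

# Z can now be passed to matplotlib's contourf to shade the decision regions
print(Z.shape)
```

Refitting with different parameter values and redrawing the contour is what makes each parameter's effect visible.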
In this post I repeat the experiment done in our post on XGBoost, but for Support Vector Machines. I use an analogy of separating chocolate M&M's in an attempt to intuitively explain how the algorithm is functioning. The visual effect of the different kernels is interesting.
In part 1 of this post I demonstrate how to build a spot-checking algorithm that can evaluate a basket of machine learning algorithms on scaled and un-scaled data.
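The core loop is straightforward, as in this sketch: cross-validate each candidate model on both the raw features and a scaled pipeline, then report the winner. The particular models and dataset here are placeholder choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Basket of candidate algorithms, all with default-ish settings
models = {
    "logistic": LogisticRegression(max_iter=5000),
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    for scaled in (False, True):
        est = make_pipeline(StandardScaler(), model) if scaled else model
        key = name + ("_scaled" if scaled else "")
        results[key] = cross_val_score(est, X, y, cv=3).mean()

best = max(results, key=results.get)
print(best, round(results[best], 3))
```

Running scaled and unscaled variants side by side quickly reveals which algorithms are sensitive to feature scaling.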
In part 2 of this series we create a new spot-checking algorithm using Hyperopt. In the first part of this series, our spot-checking algorithm used sklearn's default configuration for each algorithm and returned the score of the top performer. By integrating Hyperopt into the spot-checking algorithm, we can perform a quick, intelligent search of each solution space and return the best result Hyperopt finds after tuning each algorithm's parameters.