Strix Devlogs

Blogging stuff about life, projects, etc.
Some devlogs too.

Strix Devlogs

Post by dendiz » Wed Oct 10, 2018 8:36 pm

---
title: "devlog #1"
date: 2018-05-29
---

summary
  • setup of a new project
I mentioned on my microblog that I tend to work on projects in cycles, alternating between games and financial projects. I've noticed this pattern over the past two years: I started with chess-related projects, then the sudden crypto trading boom happened and I switched to trading projects. When the crypto market crashed I lost interest in the markets and started on another TBS game. Now the Turkish lira is crashing and I'm once again interested in the markets. So this time I want to build a project that analyzes US stocks and runs scanners on the price data to generate signals. I want the system to work like Twitter: you follow companies, and the signals appear in each company's timeline.

The architecture so far is pretty simple
[Attachment: stocktoot.png - architecture diagram]
I get the stock data from IEX and cache it in Redis. N signal scanners run on this data to generate signals, and the results are stored in MySQL. There is a signal point system that is arbitrary at the moment: I assign a multiplier and an adder value to each signal, which I use later to compute the points. If an instrument's signal points exceed a certain threshold, it emits a signal.
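
To make the point system concrete, here is a minimal sketch of how such a multiplier/adder scheme might look; the Signal class, the base score, and the threshold are hypothetical illustrations, not the actual project code:

```java
// Hypothetical sketch of the arbitrary point system described above; the Signal
// class, field names, and threshold are illustrative, not the project's code.
import java.util.List;

public class SignalScorer {

    static class Signal {
        final String name;
        final double multiplier; // per-signal weight
        final double adder;      // per-signal offset
        Signal(String name, double multiplier, double adder) {
            this.name = name; this.multiplier = multiplier; this.adder = adder;
        }
    }

    static final double EMIT_THRESHOLD = 10.0; // arbitrary cutoff, as in the post

    // Each matched signal contributes baseScore * multiplier + adder.
    static double points(List<Signal> matched, double baseScore) {
        double total = 0;
        for (Signal s : matched) {
            total += baseScore * s.multiplier + s.adder;
        }
        return total;
    }

    static boolean shouldEmit(List<Signal> matched, double baseScore) {
        return points(matched, baseScore) >= EMIT_THRESHOLD;
    }
}
```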


Re: Strix Devlogs

Post by dendiz » Wed Oct 10, 2018 8:39 pm

---
title: "devlog #2"
date: 2018-05-30
---

summary
  • UI design
  • technicals
  • performance
Now that the foundation for fetching and caching the data was complete and I had a couple of scanners going, I started on the technical indicator calculations. The first batch contains:
  • price
  • price change percentage
  • ATR
  • ADX, and DI +/-
  • Long, medium and short term trends based on moving averages (a rough sketch follows this list)
  • 52 week high and low
  • average volume
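
As a rough illustration of the moving-average trend idea, here is a sketch under my own assumptions; the periods, tolerance, and Trend labels are hypothetical, not the project's actual rules:

```java
// Illustrative only: classify a horizon's trend by comparing the last close
// to its simple moving average. Periods and the 1% tolerance are made up.
import java.util.Arrays;

public class TrendClassifier {

    enum Trend { UP, DOWN, FLAT }

    // Simple moving average of the last `period` closes.
    static double sma(double[] closes, int period) {
        int from = closes.length - period;
        return Arrays.stream(closes, from, closes.length).average().orElse(Double.NaN);
    }

    // Hypothetical rule: price meaningfully above the SMA means an uptrend.
    static Trend trend(double[] closes, int period) {
        double last = closes[closes.length - 1];
        double avg = sma(closes, period);
        if (last > avg * 1.01) return Trend.UP;
        if (last < avg * 0.99) return Trend.DOWN;
        return Trend.FLAT;
    }
}
```

Calling `trend(closes, 200)`, `trend(closes, 50)`, and `trend(closes, 5)` would then give long, medium, and short term labels respectively.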
At first I thought I'd only store one record per symbol containing the most up-to-date data on the instrument, but then I opted to store the history of the technical data as well. It could be useful to browse later.


When running the technical calculations I noticed that the Redis cache retrieval code was still not fast enough, so I tweaked it. Instead of making N parallel requests to Redis per stock symbol and parsing and storing the results in the local cache, I now do a single multi-get to Redis, then parse the JSON data in parallel and store it in the cache. This is much faster (I didn't measure the exact amount), but even after this optimization it still wasn't fast enough: I can't wait more than 10 seconds for the cache warm-up. So I looked at the next offender, the JSON parsing. Benchmarks people have published on GSON's performance suggest it is one of the slower libraries around. I thought that maybe org.json could do better by avoiding reflection, but that seems to be the only library out there that's slower than GSON. Luckily the library that comes with Spring (Jackson) is one of the fastest, so I switched to it, and the results were amazing.

GSON took 133 seconds to parse ~6,000 items, while Jackson took 1.7 seconds to parse the same items.
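
For reference, a minimal sketch of the multi-get plus parallel-parse approach, assuming Spring's StringRedisTemplate and Jackson; the Bar type, key scheme, and cache layout are hypothetical:

```java
// Sketch of the cache warm-up described above: one Redis round trip,
// then CPU-bound JSON parsing done in parallel. Not the project's actual code.
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.data.redis.core.StringRedisTemplate;

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

public class CacheWarmer {

    private final StringRedisTemplate redis;
    private final ObjectMapper mapper = new ObjectMapper(); // thread-safe for readValue
    private final Map<String, List<Bar>> cache = new ConcurrentHashMap<>();

    public CacheWarmer(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public void warmUp(List<String> symbols) {
        // One MGET instead of N parallel GETs.
        List<String> blobs = redis.opsForValue().multiGet(symbols);
        // Parse the JSON payloads in parallel.
        IntStream.range(0, symbols.size()).parallel().forEach(i -> {
            String json = blobs.get(i);
            if (json == null) return; // symbol missing from the cache
            try {
                cache.put(symbols.get(i),
                        mapper.readValue(json, new TypeReference<List<Bar>>() {}));
            } catch (Exception e) {
                throw new RuntimeException("bad JSON for " + symbols.get(i), e);
            }
        });
    }

    static class Bar {
        public String date;
        public double open, high, low, close;
        public long volume;
    }
}
```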

After this improvement I could run the technical calculations without wasting time on application startup.


Another annoyance was the NaN values that TaLib4J returns. I didn't expect to see NaN values, but some of the IEX data for some stocks contains no opening/closing prices, or the stock didn't trade at all on some days so the prices are all the same. This produced NaN values in the library's output. I sorted these out.
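
The guard itself can be as simple as this sketch (the names and window convention are mine, not the project's):

```java
public class NanGuard {
    // Illustrative: check an indicator's output window for NaN before using it,
    // e.g. when missing OHLC data upstream poisoned the calculation.
    static boolean isUsable(double[] out, int from, int count) {
        for (int i = from; i < from + count; i++) {
            if (Double.isNaN(out[i])) return false; // skip this symbol/day
        }
        return true;
    }
}
```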

I also started on the GUI design for the stock data display page. Foundation CSS provides a nice framework for the layout and such, so I'm not using Bootstrap.
[Attachment: stocktoot2.png - stock data display page design]


Re: Strix Devlogs

Post by dendiz » Wed Oct 10, 2018 8:40 pm

---
title: "devlog #3"
date: 2018-06-01
---

summary
  • UI design
A new month is here and time goes by fast. There's a theory that time seems to pass faster as you get older because each year is a smaller percentage of your life: a year when you are 5 is 20% of your age and feels long, but at 40 it's only 2.5% of your age and goes by faster.

So what's new? I've concentrated on the stock view page today. Every parameter I had hard-coded during the design phase now shows the actual value it's supposed to, so the main part of the stock view page is complete. The Thymeleaf style takes a bit of getting used to, but I've figured out everything I needed.

A nasty bug I came across: the technical data and scans were stamped with the date the user requested the scan. While testing I happened to pick a date and saw in the database that it fell on a weekend. I fixed it by using the last date present in the data returned from the API.

Another related issue was that dates from the API were showing up as weekends. The IEX API returns the date field as a string, and the JSON parser was applying the local timezone, which could shift the result to the day before the one in the string. Easy fix: set the parser's timezone to US Eastern, where the stock exchanges are.
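
A minimal sketch of that fix, assuming Jackson as the parser (per devlog #2); the date pattern is illustrative:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class MapperConfig {
    public static ObjectMapper iexMapper() {
        ObjectMapper mapper = new ObjectMapper();
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd"); // illustrative pattern
        // Interpret bare date strings in the exchange's timezone instead of the
        // JVM default, so "2018-06-01" doesn't shift to May 31 west of New York.
        fmt.setTimeZone(TimeZone.getTimeZone("America/New_York"));
        mapper.setDateFormat(fmt);
        return mapper;
    }
}
```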


Re: Strix Devlogs

Post by dendiz » Wed Oct 10, 2018 8:43 pm

---
title: "devlog #4"
date: 2018-08-10
---

summary
  • reboot the project
  • current status

Rebooting the project

A technical analyzer is one of those projects I keep returning to. I had already laid down a nice foundation with TaLib4J for the technical calculations, so each new iteration is mostly about cleaning up the glue code that changes around it. This time I've taken a new approach by building a team around the project to push me to turn it into a product. I've created a multi-module Spring project with the following modules: API, Databus, Engine, and corelib. The API is the gateway for clients to access analysis data and takes care of user management. The Databus module is an API for the API module to access the data. The Engine is a command-line application that does the actual calculations, and corelib contains the common classes shared by the other modules. After initial testing it turned out that having the Databus as a separate service carries severe performance penalties due to JSON data conversion and the like, so I folded the Databus into corelib. You want to go service-oriented until you realize only 2 other services would consume it, and then you integrate it back. Keeping the Engine separate made sense though, as it runs as a batch processor.


Current status

As of today there are 33 unique scanners, and I also run the 2-element combinations of these for a total of C(33,2) = 528 combined scanners. This produces around 6 M scan results for 20 months' worth of stock EOD data. I want to increase the combination count, but I'm not sure the server I have in mind for production can handle it. That's on my to-try list and I'll report how it goes. I have most of the basic user management features completed in the API, except for the payment and transactional email integrations. I'm also adding performance evaluation for the scanners: starting from the date a signal was generated, I check going forward whether the price moved up/down X ATRs, confirming the signal. This calculation scales with the number of scan results, so increasing the combination count will make it more expensive. I've asked a question on Math StackExchange about calculating the conditional probability of 2 technical indicators, but have yet to receive a satisfying answer. Being able to calculate a reasonably accurate approximation (~1% error, maybe?) would mean I don't actually have to run all the combinations to get their scores. I'm also using an error rate calculation based on Z-tables to put a confidence interval on each scanner's score.
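
For the Z-table part, here is a sketch of a normal-approximation confidence interval on a scanner's hit rate; 1.96 is the standard 95% z-score, everything else is illustrative:

```java
public class ScannerScore {

    // Two-sided 95% z value for the normal approximation to the binomial.
    static final double Z = 1.96;

    /**
     * Confidence interval for a hit rate: p ± z * sqrt(p(1-p)/n),
     * where `hits` signals out of `n` were confirmed. Returns {low, high},
     * clamped to [0, 1].
     */
    static double[] confidenceInterval(int hits, int n) {
        double p = (double) hits / n;
        double err = Z * Math.sqrt(p * (1 - p) / n);
        return new double[] { Math.max(0, p - err), Math.min(1, p + err) };
    }

    public static void main(String[] args) {
        double[] ci = confidenceInterval(62, 100);
        System.out.printf("hit rate 0.62, 95%% CI [%.3f, %.3f]%n", ci[0], ci[1]);
    }
}
```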
A nice optimization I did was to keep all the stock OHLCV data in cache and use binary search to find the bars between two dates instead of hitting the DB each time. In an earlier version I was keeping the raw OHLCV data in files, but reading them into the cache takes longer than reading from a DB, and a DB also gives me the option of querying in different ways, which I will need in the future.
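
A minimal sketch of that binary search over the cached, date-sorted bars (the Bar type is hypothetical):

```java
import java.util.List;

public class OhlcvCache {

    // Bars are kept sorted by date; ISO yyyy-MM-dd strings sort lexicographically.
    static List<Bar> between(List<Bar> sorted, String from, String to) {
        int lo = lowerBound(sorted, from);
        int hi = lowerBound(sorted, to); // first index with date >= to
        if (hi < sorted.size() && sorted.get(hi).date.equals(to)) hi++; // make `to` inclusive
        return sorted.subList(lo, hi);
    }

    // Classic lower-bound binary search: first index whose date is >= key.
    static int lowerBound(List<Bar> bars, String key) {
        int lo = 0, hi = bars.size();
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (bars.get(mid).date.compareTo(key) < 0) lo = mid + 1;
            else hi = mid;
        }
        return lo;
    }

    static class Bar {
        String date;
        double open, high, low, close;
        long volume;
    }
}
```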
Yesterday I noticed that one of the scanners used 4 conditions to check for a signal: ma5 > ma26, ma26 > ma50, ma50 > ma200, stoch < 20. This gave me the idea of making each of these conditions a scanner in its own right and brute-forcing through all the combinations, calculating the score to find the ideal scenario for each symbol. Maybe the best results for a stock come when ma5 < ma26, ma26 > ma50, ... because there was a short-term dip in price that pushed the stochastic to a low, etc.
I also implemented an optimization for the score calculator yesterday. Before, I looped over each symbol, fetched the scan results for that symbol, and calculated the scanner scores from those results, which meant too many DB round trips. I changed it to loop over each scanner combination and store the scan results in a map keyed by symbol. That's one fewer loop and fewer DB requests.
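
In sketch form, the restructuring amounts to grouping one combination's results by symbol in memory (the ScanResult type is hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ScoreCalculator {

    static class ScanResult {
        String symbol;
        String scanner;
        boolean confirmed;
    }

    // One fetch per scanner combination, then group in memory by symbol
    // instead of issuing one query per symbol.
    Map<String, List<ScanResult>> bySymbol(List<ScanResult> resultsForCombination) {
        return resultsForCombination.stream()
                .collect(Collectors.groupingBy(r -> r.symbol));
    }
}
```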


Re: Strix Devlogs

Post by dendiz » Wed Oct 10, 2018 8:46 pm

---
title: "devlog #5"
date: 2018-09-05
---


summary
  • current status
Current status

It's been quite a while since my last devlog, and a lot has happened in the project in between. I'll try to merge the commit logs and the stuff I posted on Mastodon into an overview. I've been coding the scanners for most of the month. We consolidated the scanners into categories, with constraints on the combinator so that you don't combine 2 scanners of the same category, and so that you don't get bullish/bearish combinations. There are 70 scanners now, and it's hell having to change anything on the scanner interface. I've partially solved this by extending the scanners from an abstract class, but for a feature called parameter customization I still have to go over almost all the scanners and add their default parameters. I've also hit a lot of performance issues with the scanner performance calculations. Doing the calculation over the last year's data took quite a while when reading scan results from the database, so I decided to truncate the stored scan results to a month's worth of data and run a year's worth of scans online during the performance calculations. Fewer DB round trips improved performance. There was also a classic N+1 select problem, where the DB was queried in each iteration of a loop. I fixed it with an IN query that fetches everything I need before entering the loop.
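
A sketch of that N+1 fix, assuming a Spring Data JPA repository; the entity and service are illustrative, but `findBySymbolIn` follows Spring Data's standard derived-query convention:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.data.jpa.repository.JpaRepository;

@Entity
class ScanResult {
    @Id @GeneratedValue Long id;
    String symbol;
    public String getSymbol() { return symbol; }
}

// Spring Data derives "... WHERE symbol IN (:symbols)" from the method name.
interface ScanResultRepository extends JpaRepository<ScanResult, Long> {
    List<ScanResult> findBySymbolIn(Collection<String> symbols);
}

class ScoreService {
    private final ScanResultRepository repo;
    ScoreService(ScanResultRepository repo) { this.repo = repo; }

    void scoreAll(List<String> symbols) {
        // One IN query up front instead of one query per loop iteration.
        Map<String, List<ScanResult>> bySymbol = repo.findBySymbolIn(symbols).stream()
                .collect(Collectors.groupingBy(ScanResult::getSymbol));
        // ... compute scores from the map; no DB access inside the loop ...
    }
}
```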

I had started using MongoDB for raw data such as OHLCV and scanner results, but complex queries in MongoDB are a pain, so I decided to move everything into MySQL and dropped MongoDB from all modules of the project.

I have a couple of ideas for recommending indicator parameters based on the indicator's values. I originally wanted to brute-force my way through parameter combos to find the best-scoring combo for each symbol and indicator, but that leads to an unmanageable number of scans. Instead I will try to recommend RSI/Stoch overbought/oversold parameters based on the levels that actually acted as turning points in the charts.


There was progress on the web client as well. I set up a nice continuous integration server that deploys every pushed commit, and I also set up my router with DynDNS and port forwarding so the team can use the test environment. I repurposed my workstation as a Proxmox host (something I had done before) and I'm using my laptop as my workstation now, which kind of sucks because it's a weak machine. I don't really want to spend money on a new workstation right now: I've received my EAD and plan to start working somewhere after I get this project live. Back to the web client: the main page is almost done, but since nobody on the team is a designer or has any design background, it doesn't look professional to me. That could just be bias, I'm not sure. I floated the idea of paying someone to design it, but it wasn't received well - probably because we can't visualize what we want and couldn't really tell a designer what to do. The pricing, login, and registration pages exist and are functional, but not really tested. Testing and product management are weak points in the team.

So, product management for this project is actually quite simple: I expect wireframes, use cases, and some testing from our PM. He has no experience, but I just can't understand why somebody can't simply research all this and do it. I have to step in at almost every step, and this is slowing us down and demotivating me - I don't want to do this myself; that's the point of having a product manager.


I've set a tight deadline to go live - per Parkinson's law, work expands to fill the time available, so I want to keep everybody on their toes - but my current status on this is yellow, which is why I'm also working on a plan B. The opportunity cost is just too high now that I have the EAD.


Re: Strix Devlogs

Post by dendiz » Wed Oct 10, 2018 8:50 pm

**summary**

* stuff done in September 2018
* Jenkins automation
* persistence problems
* return to single thread mode
* scan refactoring and parameter change support
* scan performance improvements
* java 10 adventures
* parameter recommendation tests
* hybrid client developments

Wow, September was a month I can only describe as "I lost my humanity and became a beast". Checking the git logs, I count 480 new commits touching 26,410 lines across the API and Engine, which is quite a bit. There's too much to cover in detail, so I'll go over the most important changes of the month.

First off, the automation tasks. I was running the BFX and IEX sync tasks and the scanner tasks via crontab on the test machine. This was OK for a while, but I wasn't getting notifications about failed jobs or any information about task durations, so I moved all these periodic engine tasks to Jenkins. An added bonus is that it's much easier to manage the task pipeline (which task should run after which, which can run in parallel) from Jenkins than with a bunch of ad-hoc shell scripts. I also changed the development cycle from committing straight to master to committing to a development branch first and then asking Jenkins (via Hubot through Slack) to test, merge, and push the changes. This keeps the master branch stable at all times. G. requested that any push to the web client's master branch be deployed immediately, so I had to do some GitLab trickery to get that working, as GitLab doesn't offer that for unpaid accounts - but it can be done via webhooks.

The current cron flow looks like this:
[Attachment: 1-m79Makp8mOHomhro.png - Jenkins task pipeline diagram]
Using Spring Boot is a MUST if you're developing in Java - the advantages are countless - but it comes with its own quirks. I had been getting "transaction manager not found - cannot remove entity" errors in some of the data sync services. The solutions on the internet all said to add a transactional annotation to the service method, and people had accepted those answers, but I guess something changed along the way in Spring's development, because that didn't work for me. The solution was to add the annotation to the repository interface. Something so simple can eat up a lot of precious development time, but it's satisfying when you finally solve it. While fiddling with this I also decided to save the raw response JSON from the providers instead of parsing it into domain entities and saving those. Instead of 1 M daily data points I now have 8 K key-value items, which I process in memory. I was caching the domain objects anyway, so this spares the database some extra load.
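
A sketch of the fix, with hypothetical entity/repository names; Spring Data's derived delete queries need an active transaction, which annotating the repository provides:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Lob;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.transaction.annotation.Transactional;

@Entity
class ProviderBlob {
    @Id @GeneratedValue Long id;
    String cacheKey; // e.g. a provider/symbol key (illustrative)
    @Lob
    String json;     // raw provider response, stored as-is
}

// Annotating the repository (not the service method) supplies the transaction
// that the derived delete query needs.
@Transactional
interface ProviderBlobRepository extends JpaRepository<ProviderBlob, Long> {
    void deleteByCacheKey(String cacheKey); // failed with "cannot remove entity" without a tx
}
```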


During development I wanted engine results as fast as possible, so I had parallelized all the operations in the engine. Along the way I noticed that "parallelStream" is not the answer to every question. I ran into cases where parallelizing would really screw up the cache, and serial processing with good cache usage is far more efficient than simply using all the cores: I get more work per core than when the processing ran in parallel. This decision was also driven by the fact that I want to run the engine on the same machine as the API (yes, I am basically poor), so I can't have the engine hogging all the CPU and exhausting RAM. I also can't keep the entire data set (or most of it) in memory, which tends to happen when many threads run at once. With serial processing and optimal cache usage, the scanning process takes 2.5 K seconds on a single core; running in parallel took 600 seconds on 8 cores.

A prerequisite to serializing the scanners was improving their performance. I installed the excellent YourKit profiler trial (which, by the way, is very expensive - otherwise I would buy it) and tracked the bottlenecks down to unnecessary object wrapping (my own SuperList class with convenience methods for accessing elements, like getLast(N), getTail(..), etc.) and parsing strings to dates and vice versa. After hunting these down and refactoring away the extra layers and work, there was a 3x speedup, which brings the processing time into acceptable limits.

An integral part of the system is the evaluator that scores scans based on past performance. I had coded it in a hurry and it was a bit disgusting, so I refactored it into its own service. The functionality is the same, but it's simpler and runs a bit faster.

I hate the verbosity of Java, so I looked into what's been going on in the Java world, which I hadn't done since Java 8 and the streams API. It turns out Java 10 was released in March 2018 and finally supports the "var" keyword, which means less verbose assignments. It's kind of trivial, but I wanted to give it a go, since Spring Boot is supposed to support Java 10. So I updated the development and test environments to Java 10 and compiled the code. A couple of warnings about unsafe access in Spring, but otherwise everything seemed OK - until I tried running the API. Then came a lot of exceptions about Redis being unhappy with something (which I don't remember, and couldn't find an easy solution for), so I reverted to Java 8, which is far more stable. I have long lines of code, but at least everything works correctly :).

A great idea I had for the engine is parameter optimization. RSI's overbought level is 70 by convention and the stochastic's is 80, but why? Why not 60? I guess whoever invented them had success with those values, kept them as defaults, and now everybody uses them. But wouldn't it be great if I could find the optimal level by scanning past RSI turning points? That's what parameter optimization/recommendation is about. There are a LOT of combinations to calculate, so this feature has to be selective about the symbols and parameter values it tries. I still have to let this idea bake, but I did put down a PoC service that does it.
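
A toy sketch of what such a threshold sweep could look like; the crossing rule, the horizon, and the scoring are my own illustrative assumptions, not the PoC's actual logic:

```java
public class RsiThresholdSweep {

    /**
     * Try candidate overbought thresholds and score each by how often the price
     * actually turned down within `horizon` bars after RSI crossed above it.
     * Illustrative only; the project's real scoring is ATR-based.
     */
    static int bestThreshold(double[] rsi, double[] close, int horizon) {
        int best = 70;
        double bestRate = -1;
        for (int t = 55; t <= 85; t += 5) {
            int signals = 0, hits = 0;
            for (int i = 1; i < rsi.length - horizon; i++) {
                if (rsi[i - 1] < t && rsi[i] >= t) {       // crossed above threshold t
                    signals++;
                    if (close[i + horizon] < close[i]) hits++; // price turned down
                }
            }
            double rate = signals == 0 ? 0 : (double) hits / signals;
            if (rate > bestRate) { bestRate = rate; best = t; }
        }
        return best;
    }
}
```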

I also started on mobile client development with PhoneGap and Framework7 with Vue.js. Vue.js is pretty amazing - at last someone has come up with a good framework for JavaScript development. I used to be a Mithril person, but the lack of templating and that m() function were a bit annoying, and Mithril's component system is complicated, not as easy as Vue's. Framework7 looks OK-ish, but it has its quirks too: I ran into a router problem that would randomly break the back button. It's surprisingly difficult to find answers to questions - I guess the community isn't large - so I'll probably drop F7 in favor of Vuetify.

That was a long post, but I guess that's how it's going to be if I do these monthly. I still want to do a write-up about team dynamics and the frustration I'm having there.


Re: Strix Devlogs

Post by dendiz » Wed Oct 10, 2018 8:51 pm

summary
  • rants about team dynamics
Working with a team on anything is a problem in its own right. One of the most difficult parts - one with many moving parts and delicate balances - is cooperating with people. I started working on Strix alone, then paused for a while and did something else. When I resurrected the project, I took on two other guys (old friends) to work with me; let's call them G. and O. I like bouncing ideas off other people, and discussion leads to progress. But the people you choose for this matter. It's not like I had a lot of options: I'm not hiring anyone, and I need to trust and get along with the people I work with. I knew O. had no experience in developing or marketing a product. I knew G.'s capabilities were limited - to put it another way, he works slowly - but I believed, or wanted to believe, that passion could overcome both the inexperience and the slow progress. I believed this because that's how it has always been for me. If I don't know how machine learning works and I want to use it for something, I will obsess over it until I can do what I need to do, or until I'm burnt out and cannot continue, at which point I've lost interest in that subject and move on to something else. If I'm not interested in something, I acknowledge it up front and don't even get into it - if I'm in, I'm all in.

Now it's been about a month and a half of development on the web client, and I can't accept that we have not come far at all, and that at the last meeting the estimates given for completing the rest were twice as long. To me it's simple: this is a week's work, maybe two. I announced from the very beginning that the time frame to get this out is limited, and yet I still see days with no activity on the code repository. I just cannot bring myself to accept this. Then there's the prioritizing of trivial cosmetic things, and simple one-liners that are easily found in the documentation taking days. Use-case stories not written in detail; no holistic thinking, no detailed thinking, just a mashup of two or three different sites thrown down as an incoherent mess of wireframes. Testing covering only what was supposed to be done, no extra defect issues opened, no methodical testing documentation for the web client, no understanding of agility or the need for it.

I think I could go on venting for a long time, but I've reached the point where I'm about to throw in the towel on the team - not the project. The major problem is that these people are my friends, so I don't want to let them down too harshly. I guess this is one reason you don't form friendships with the people you work with and keep them as just colleagues. This isn't the first time I've done something like this, but it's the first time I'm this frustrated about it. What I'm doing for certain is continuing on my own; there's no need to carry dead weight. But how to do the transition? I could be upfront and state my frustrations (which I have done before, telling the team that progress was slow - more on this later), or I could just stop developing on the shared repository. One might think I haven't communicated these issues, but I have told them repeatedly, both spoken and written, that progress was slow, testing was inadequate, and the design looks unprofessional. Trimming everything down because of the slow progress on the client side left us with a plan for a "read-only" version of the site. And this version will take 2 more months to complete? No way. I'm ranting again :)
So, back to the transition options. I think I'll do a mix of both: wait for the deadline we set when starting the project, and after it passes, take the repository offline. Letting the team go also means taking on the whole financial risk, so I have to think about optimizations there too. I can't have 5 servers running the application; I have to reduce it. I'll have only a third of the marketing budget, so I have to come up with other ways of marketing. We already had a limited budget and needed those ways, but O. never came up with anything creative. If I have to come up with it anyway, I don't need him on the team.

The only thing I'll miss, I guess, is having something to talk about and discuss on Slack - even though those discussions were sometimes pointless, there were times they were productive.


Re: Strix Devlogs

Post by dendiz » Sat Oct 13, 2018 9:48 am

summary
  • more on team dynamics - resolution
I've been considering the team situation over the past week, and I had a talk with both my teammates about it. The talks were very positive, and their attitude convinced me that they are doing their best to get this launched. I also realized my expectations of them were too high, though that in itself is not a bad thing.

The worst part of this whole charade is that I decided to do a project with friends. That was a bad decision: becoming friends with colleagues is fine, but the other way around is not - too late to change that now. Anyway, the gist of the talks was that motivation suffered from the lack of design, since an ugly site deters everyone - which I believe too. Choosing between functionality, design, and the number of features, we had to give up on one. In the end I convinced them to cut the feature count, and introduced a new design with Vuetify. The fresh look boosted morale, that's for sure. And I went to the extreme with trimming features: the first version of the site is read-only. No user input at all, including login and registration. There's no point registering on a site that doesn't store any personal data for you anyway. This seemed quite extreme when I first said it out loud, but now it makes a lot of sense. One interesting thing I realized is that a clean look has more impact on perception and motivation than I would have expected. O. said it felt like a hobby project after I said we should stop focusing on design - as if an ugly project would not attract any users, so it doesn't feel like a real project. I understand the viewpoint, but as with many things, it's about finding the right balance.


Re: Strix Devlogs

Post by dendiz » Fri Nov 02, 2018 9:48 pm

Well, I've been offline for about 2 weeks due to a life-changing event: my baby girl. The past 2 weeks have been about adjusting to a new lifestyle, a new sleep cycle, and new tasks. I've become a master diaper changer after almost 150 diapers :) I set the TechScan launch date for October 15th for a reason: I knew I wouldn't have time for at least a couple of weeks after the baby arrived, and I was hoping the rest of the team could continue with UI improvements and marketing while I was offline. Of course that didn't happen; they were offline the whole time too. I just can't get my expectations met with these guys. Just before the arrival I was toying with the idea of porting the engine code to Python. My reasons:
  • I like to procrastinate and experiment
  • The Java code base was getting out of hand at 30k lines (mostly boilerplate)
  • The Java code's memory consumption is high
  • The engine is single-threaded anyway, so there's no reason not to use Python
  • TA-Lib has Python bindings
Porting the code was pretty easy; I've completed the major parts. Memory consumption dropped from 4G to 1G, and the code is much more concise. I can work on it from my iPad over an SSH connection. I have easy low-level access to the database, so it's faster and easier to do things in SQL. I've put this version of the engine on my own git server, as I'm not planning to share it. I think I'll be continuing on my own from now on; I believe the old version and the old project will be forgotten by the other members of the team in time.


Re: Strix Devlogs

Post by dendiz » Tue Nov 20, 2018 6:31 am

So I've been working on the engine every chance I get over the past 2 weeks, which is not a lot these days. I've also had to adapt the web client to some breaking API changes (mostly field names and simple structure changes). With major changes in the database structure - some of the old tables merged into the key-value table - the engine code is a bit clearer now. MySQL should handle queries against the KV table with ease: it's properly indexed, and the record cardinality is much lower because most of the data now lives in a JSON structure.

I implemented the top activity module, correlation, and the news sync module. The correlation finder takes a long time, since Python is slow at iterating over a lot of records and each symbol has to be checked against every other symbol to find a correlation. That made me want to switch to something faster, but I resisted the urge: the correlation finder only needs to run every other week or so, and it's OK if it takes 12 hours. I also got rid of the old API code, which hogged memory thanks to Spring + Hibernate keeping tons of classes and garbage around, and went with Flask, a simple micro framework for building APIs. Currently I open a new database connection for each request, and I need to test whether that scales under load. From what I've read, "connections are expensive" is now a myth with newer databases, but the network overhead could still prove the theory wrong.

In the second half of the month I adjusted the web client to the new API responses and fixed cosmetics here and there. I can probably say I've ported all the old code to the new API, with maybe a couple of features missing that I'll add in the coming days. A major change on the client was switching from the Google Charts JS library to static images for the candlestick charts. My initial thought was to offload chart creation to the client to lessen the server load, but that turned out to have 2 disadvantages: 1. slower mobile clients take forever to render the chart (my Samsung tablet); 2. a ton of charting data is transferred to the client, which slows page loading. So I struggled for a day with the excellent matplotlib to get a nice candle chart with a volume overlay, and I think it turned out quite well.
[Attachment: Screenshot 2018-11-19 at 10.26.39 PM.png - candlestick chart with volume overlay rendered with matplotlib]
Before this was complete I used a chart from Finviz as a placeholder and inspiration. I also managed to squeeze in the Android client build using an excellent Vue plugin that was quite painless to set up. I side-loaded the app on my phone and tablet, and it seems to work great. After loading the app I realized some things, like pull-to-refresh, were missing: it's essential to actually build the mobile app and try it out to get a good feel for the user experience, even though I do most of the client development in a browser. My plan for the coming days is to use the app to iron out more user experience quirks; then I need to start on the StockTwits integration and the marketing work. The launch timing seems quite bad, with the markets having taken a turn for the worse - or maybe people will be searching for opportunities in this turmoil and can use TechScan to seek them out?

Here is the latest look:
[Attachment: Screenshot 2018-11-19 at 10.31.13 PM.png - latest look of the web client]
