Re-architecting content review system to support tens of thousands of human and algorithmic reviewers. Whereas the previous system was built around one review job per piece of content, the new system is being designed around a single content model for each versioned piece of content, supporting zero, one, or more jobs as required. This permits the system to dynamically choose the best strategy for performing content review given what information about the job is already known, what needs to be known, and what has changed.
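The one-content-model, zero-or-more-jobs idea described above can be illustrated with a minimal sketch. This is not the actual Facebook system; all class and field names here are hypothetical, chosen only to show how a versioned piece of content can own its review jobs and reuse an in-flight job rather than duplicating work.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model: one ContentVersion per versioned piece of content,
// owning zero, one, or more review jobs as required.
final class ContentVersion {
    final String contentId;
    final int version;
    final List<ReviewJob> jobs = new ArrayList<>();

    ContentVersion(String contentId, int version) {
        this.contentId = contentId;
        this.version = version;
    }

    // Open a new review job only if no open job already covers this
    // policy for this version; otherwise reuse the in-flight job.
    ReviewJob requestReview(String policy) {
        for (ReviewJob job : jobs) {
            if (job.policy.equals(policy) && !job.closed) {
                return job;
            }
        }
        ReviewJob job = new ReviewJob(policy);
        jobs.add(job);
        return job;
    }
}

final class ReviewJob {
    final String policy;
    boolean closed;
    ReviewJob(String policy) { this.policy = policy; }
}
```

Because the content version, not the job, is the unit of identity, the system can decide per request whether an existing job's findings still apply or a new job is warranted.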
Technologies: JDK 1.8, PHP/HHVM, proprietary Facebook tools and frameworks.
Hands-on role serving as the Cloud Architect for the My Cloud Home NAS product. My Cloud Home is a budget-friendly NAS-like device coupled with a cloud ecosystem that blends private, on-premises content storage with cloud features like metadata aggregation, metadata-driven search and content discovery, content sharing, and automated content transformation. I worked with the team to move from a primarily Java + Spring + Tomcat stack to a mostly serverless stack, and to replace many complex REST calls with a simple event-driven, asynchronous, messaging-based system utilizing AWS IoT. In addition, I spearheaded the initiative to implement a next-generation metadata system with mechanisms for priority-optimized and cost-efficient metadata extraction/production, storage, and access.
Technologies: JDK 1.8, AWS (EC2, ECS, DynamoDB, RDS, Lambda, and IoT), RESTful web services, GIT, Maven, and Gradle.
Spearheaded and led a major re-architecting of Apple's Video Encoding pipeline system. Before I joined the team, each software release was met with breakage of major functionality, the code was extremely difficult to enhance as business logic was sprinkled inconsistently throughout the layers of the system, and only black-box testing was possible. I identified the various phases of system execution and started putting in place mechanisms to formalize the execution requirements of each phase. As part of this new design, I ensured that each phase had well-defined entry and exit points, was able to rely on the work performed by previous phases, and could focus on adding substantial value without contributing to system complexity. The end result was a system that was easy to comprehend, in which code constructs matched the terminology used by the team and customers, in which modules followed well-understood contracts, and in which real unit testing was possible and straightforward to implement and maintain.
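The phase contract described above can be sketched in a few lines. This is an illustrative sketch, not Apple's code; the `Phase` interface, the shared context map, and all phase names are assumptions. It shows the core idea: each phase declares what it requires on entry and guarantees on exit, and the runner enforces both, so later phases can safely rely on earlier ones.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical phase contract: explicit entry and exit requirements.
interface Phase {
    String name();
    List<String> requires();   // context keys that must exist before this phase runs
    List<String> produces();   // context keys this phase guarantees on success
    void execute(Map<String, Object> context);
}

final class PhaseRunner {
    // Runs phases in order, failing fast if any phase's entry or exit
    // contract is violated, instead of letting errors surface phases later.
    static Map<String, Object> run(List<Phase> phases) {
        Map<String, Object> context = new HashMap<>();
        for (Phase phase : phases) {
            for (String key : phase.requires()) {
                if (!context.containsKey(key)) {
                    throw new IllegalStateException(
                        phase.name() + " missing precondition: " + key);
                }
            }
            phase.execute(context);
            for (String key : phase.produces()) {
                if (!context.containsKey(key)) {
                    throw new IllegalStateException(
                        phase.name() + " failed to produce: " + key);
                }
            }
        }
        return context;
    }
}
```

With contracts checked at the boundary, each phase can be unit tested in isolation by handing it a context containing only its declared inputs.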
Technologies: JDK 1.8, MongoDB, RabbitMQ, Oracle RDBMS, WebObjects, Guice, Guava, GIT, Maven, ZooKeeper, Docker, RESTful web services, and BASH shell scripting.
I followed my previous VP of Engineering from Zuora to a company he co-founded in August 2012 named Tinker. I was responsible for the architecture of a low-latency transport between asymmetric devices, composed primarily of mobile phones as well as back-end servers, all connected for the purpose of creating a new type of cooperative and distributed computing mesh. Unlike the traditional model of client-server computing, where a single request is matched with a single response of perfectly consistent data, this model employed a publish/subscribe model in which a collection of services independently work to the best of their ability on small responses to greater problems. The messaging platform then helped subscribers aggregate these partial and variable-fidelity responses into actionable information; information that is expected to continuously improve over time.
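The aggregation of partial, variable-fidelity responses can be sketched as follows. This is a minimal illustration, not Tinker's implementation; the `ResponseAggregator` class, the fidelity score, and the key/value shape are all assumptions made for the example. The idea is that each subscriber keeps only the highest-fidelity answer seen so far for each part of a larger problem, so the answer monotonically improves as more publishers respond.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical subscriber-side aggregator for a pub/sub mesh: many services
// publish partial answers of varying quality, and the best one wins per key.
final class ResponseAggregator {
    static final class Partial {
        final String value;
        final double fidelity;   // 0.0 (rough guess) .. 1.0 (authoritative)
        Partial(String value, double fidelity) {
            this.value = value;
            this.fidelity = fidelity;
        }
    }

    private final Map<String, Partial> best = new ConcurrentHashMap<>();

    // Called once per delivered message; keeps the higher-fidelity answer,
    // so the stored result for a key only ever improves over time.
    void accept(String key, String value, double fidelity) {
        best.merge(key, new Partial(value, fidelity),
            (old, fresh) -> fresh.fidelity > old.fidelity ? fresh : old);
    }

    String currentAnswer(String key) {
        Partial p = best.get(key);
        return p == null ? null : p.value;
    }
}
```

Using `ConcurrentHashMap.merge` keeps the update atomic per key, which matters when responses from many publishers arrive on different threads.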
In addition to the platform, I also owned the infrastructure. All software ran in the Amazon AWS environment on EC2 instances, with extensive custom scripting and deep Maven integration to dynamically mount EBS volumes, bootstrap application code, perform routine backups, and host engineering services such as a GIT code repository, Maven package repository, Jenkins continuous integration server, and JIRA issue management.
Technologies: JDK 1.7, MongoDB, Berkeley DB, Amazon AWS/EC2/S3, Netty, TCP sockets, SSL, JCE, Java NIO, WebSockets, Objective C, GIT, Maven, Jenkins, JIRA, and BASH shell scripting.
Acted as Product Owner, team lead, and manager for the Zuora platform team focused on executing a new platform vision for Zuora's SaaS billing offering. The role included interviewing and hiring all Java development members for all of engineering (even those outside of the platform team), acting as the sole driver integrating the CEO, product management, development, and IT infrastructure teams, and serving as the visionary and architect of Zuora's core infrastructure. The role was very hands-on, and I evenly divided my time between management and development duties.
Technologies: Java 1.6 Standard Edition (J2SE), metadata driven modeling and computation, multi-threading and concurrency, MySQL 5.5, JDBC, query performance and optimization, Splunk, Apache ActiveMQ, JMS, distributed computing, JMX, JVM heap dump analysis, Java Cryptography Extensions (JCE), PCI compliance, Apache AXIS, JSON, REST, SOAP, XML, XSD, Apache Tomcat, servlets, JSP, Hibernate, Spring, and custom frameworks.
Top-down architected and developed from scratch one of the world's largest transcoding pipelines. When I started, Netflix had an inefficient and opaque encoding system that was prone to data corruption and asset mismatches. I developed a highly distributed, transparent, and robust workflow engine, toolset, and process capable of transcoding assets across thousands of Amazon cloud instances, a specialized local encoding farm, and the Microsoft Azure cloud.
Technologies: Core Java, custom Java class loader, XML configuration and metadata transports, custom SSL over TCP transport, custom Amazon S3 transport optimized for fault tolerance and efficiency with extremely large data sets, Amazon S3 for data storage, Amazon EC2 cloud computing, Microsoft Azure VM role cloud computing, JCE cryptography, JDBC, and Oracle relational database.
Initially responsible for Greystripe's Java-based Ad Server and J2ME mobile infrastructure, I soon took ownership of all non-web components of the product offering. Notably, I implemented a robust, replicating, binary logging mechanism used to record and process revenue-generating events and an extremely efficient mechanism for dynamically inserting customized data and assembling downloadable assets on demand; reworked the custom client/server TCP transport for efficiency and flexibility while maintaining backwards compatibility; consolidated all financial systems and data; greatly simplified Greystripe's byte-code manipulation system; and made numerous performance and life-cycle enhancements to the Ad Server.
Technologies: Core Java, Java byte-code development, assembly, and re-assembly, TCP server and transport, J2ME infrastructure, J2ME application development, Hibernate, and MySQL/MyISAM and MySQL/InnoDB database engines.
Designed, developed, and implemented a custom Java Applet to provide customers with financial retirement advice based upon the Nobel Prize winning work of William Sharpe. The architecture I developed provided abstract support for widgets, font scaling, dynamic logging, dynamic message support, screen layout, crash and debugging support, parallel messaging and threading, and flow control facilities.
Technologies: JDK 1.0.2, AWT, Applets, Microsoft SQL Server.