Startups Using R in Boston
Based on their job posts and on information submitted by the startups themselves, these are the Boston startups we've found using R.
Interested in other technologies? Browse or search all of the built-in-boston tech stacks we've curated.
Using computer vision and AI to provide a surgical visualization and robotics system for better imaging and diagnostic accuracy during surgery.
Land parcel search engine and intelligence platform, with agricultural data analytics.
Yahoo Pipes for big data. Drag-and-drop dataflow creation for easy custom analytics.
“Healthcare engagement analytics delivering insights designed to promote member and provider behavioral change.”
Computer vision for marketing and brand analytics, with search capabilities.
“A machine learning platform for data scientists of all skill levels to build and deploy accurate predictive models.”
LinkedIn for blue-collar workers / recruiting solutions for companies.
Real-time admissions and discharge notifications link providers anywhere patients receive care.
Tech Stack Highlights
Spring Boot – We run a number of microservices on top of Spring Boot. Its convention-over-configuration design allows us to focus on business logic rather than plumbing. We’re particularly looking forward to the Spring team’s upcoming first-class support for Kotlin, which we’ve been gradually introducing as a safe, expressive alternative to Java 8.
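To give a feel for the convention-over-configuration point, here is a minimal sketch of a Kotlin Spring Boot service. It is not our production code; the class names and the /status path are purely illustrative, and it assumes the spring-boot-starter-web dependency plus the Kotlin Spring compiler plugin.

```kotlin
// Minimal Kotlin Spring Boot service (illustrative names only).
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

@SpringBootApplication
class DemoApplication

@RestController
class StatusController {
    // Convention over configuration: no XML or servlet wiring,
    // just an annotated handler that Jackson serializes to JSON.
    @GetMapping("/status")
    fun status(): Map<String, String> = mapOf("status" to "ok")
}

fun main(args: Array<String>) {
    runApplication<DemoApplication>(*args)
}
```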
React + Redux – We’ve built a highly interactive and engaging front-end using React and Redux. The resulting code is modular, easy to reason about, flexible, and composable.
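The property that makes Redux code easy to reason about is the reducer: every state change is a pure function of the current state and an action. Our front end is JavaScript, but the pattern itself is language-agnostic; the Kotlin sketch below only illustrates that idea, and every type and action name in it is invented for illustration.

```kotlin
// Sketch of the Redux reducer pattern: immutable state, and every change
// produced by a pure (state, action) -> state function. Illustrative names only.
data class AppState(val count: Int = 0, val lastAction: String? = null)

sealed class Action {
    object Increment : Action()
    data class SetCount(val value: Int) : Action()
}

fun reduce(state: AppState, action: Action): AppState = when (action) {
    is Action.Increment -> state.copy(count = state.count + 1, lastAction = "increment")
    is Action.SetCount  -> state.copy(count = action.value, lastAction = "setCount")
}

fun main() {
    // Folding a list of actions deterministically rebuilds the state,
    // which is what makes this style composable and easy to test.
    val actions = listOf(Action.Increment, Action.SetCount(10), Action.Increment)
    val finalState = actions.fold(AppState()) { state, action -> reduce(state, action) }
    println(finalState) // AppState(count=11, lastAction=increment)
}
```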
Kafka – We use Kafka as our primary message bus. Unlike most “big data” technologies, Kafka has allowed us to scale without a notable increase in complexity. In fact, because its append-only architecture lets us view topic contents long after messages have been “consumed”, Kafka gives us significantly better monitoring and visibility than more traditional message buses (JMS, AMQP). We’re looking forward to experimenting with Kafka Streams as a lightweight alternative to standalone stream-processing frameworks such as Spark.
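That “re-readable log” property is easy to see with the standard Java client: a consumer in its own consumer group can replay a topic from the beginning for debugging without disturbing the live consumers. This is a minimal Kotlin sketch, with the broker address, group id, and topic name as placeholders.

```kotlin
// Re-reading an already-"consumed" topic for inspection, using the plain
// Java Kafka client from Kotlin. Broker, group, and topic names are placeholders.
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.KafkaConsumer
import java.time.Duration
import java.util.Properties

fun main() {
    val props = Properties().apply {
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        put(ConsumerConfig.GROUP_ID_CONFIG, "debug-inspector")   // separate group: live consumers are unaffected
        put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest") // start from the beginning of the log
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer")
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer")
    }

    KafkaConsumer<String, String>(props).use { consumer ->
        consumer.subscribe(listOf("orders"))
        repeat(10) {
            for (record in consumer.poll(Duration.ofSeconds(1))) {
                println("offset=${record.offset()} key=${record.key()} value=${record.value()}")
            }
        }
    }
}
```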
Zeppelin – We use Apache Zeppelin to query, aggregate, and visualize data across a number of heterogeneous data sources, including MySQL, Elasticsearch, and S3. We write ‘notebooks’ in Scala and SQL to drive Spark in creating these visualizations. These notebooks can be ad hoc, or shared, versioned, and parameterized.
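Inside Zeppelin those paragraphs are Scala (or plain %sql), but the shape of a typical query is the same from any JVM language. The Kotlin sketch below only shows that shape against a JDBC source; the connection details, credentials, and table and column names are all invented for illustration.

```kotlin
// Shape of a notebook-style query: load a table from a JDBC source,
// register it as a temp view, and aggregate with SQL. All connection
// details and table/column names here are illustrative.
import org.apache.spark.sql.SparkSession

fun main() {
    val spark = SparkSession.builder()
        .appName("notebook-style-query")
        .master("local[*]")
        .getOrCreate()

    val orders = spark.read()
        .format("jdbc")
        .option("url", "jdbc:mysql://db-host:3306/app")
        .option("dbtable", "orders")
        .option("user", "reporting")
        .option("password", System.getenv("DB_PASSWORD"))
        .load()

    orders.createOrReplaceTempView("orders")

    // The same query could live in a %sql paragraph in Zeppelin.
    spark.sql("SELECT status, COUNT(*) AS n FROM orders GROUP BY status ORDER BY n DESC")
        .show()

    spark.stop()
}
```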
NiFi – We use NiFi as an orchestration layer to manage real-time data flows in a simple, scalable way. The framework makes it easy to monitor the progress of messages as they move through the processing pipeline and to replay them should it be necessary.
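Most of that monitoring happens in the NiFi UI, but NiFi also exposes flow status over its REST API, which is handy for wiring into dashboards or alerts. The Kotlin sketch below assumes an unsecured instance; the host, port, and exact endpoint path are assumptions, so check the REST API documentation for your NiFi version and add authentication as needed.

```kotlin
// Polling NiFi's REST API for overall flow status from a small Kotlin script.
// Host, port, and endpoint path are assumptions; secured instances also need auth.
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create("http://nifi-host:8080/nifi-api/flow/status"))
        .GET()
        .build()

    val response = client.send(request, HttpResponse.BodyHandlers.ofString())
    // The body is JSON describing queued flowfiles, active threads, etc.
    println("HTTP ${response.statusCode()}: ${response.body()}")
}
```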
“Decision analytics” company. Using behavioral science & machine learning to improve sales, marketing, and consulting outcomes.