OneRagtime Data Scientist Overview
OneRagtime is a next-generation venture capital fund-as-a-platform specialized in sourcing, financing and scaling early-stage tech startups from across Europe. Our fully digitalized investment process offers our exclusive investor community the freedom and flexibility to choose how they invest.
To enable this model, we have built an internal tech and product team that manages three key products: our client-facing Platform, Core, our internal investment management tool, and DeepDive, which leverages data to help source and analyse deal flow.
The Data Science team at OneRagtime builds and manages DeepDive, one of our three critical products, which drives our data analytics.
The Data Science team has deep technical expertise and works with the product and business teams to leverage their business expertise and customer insights. We are engineers, technology aficionados, and experts. We offer exciting opportunities to innovate, influence, transform, inspire, and grow within our organisation, and we encourage you to apply. Do you have it in you?
OneRagtime’s Data Science team is looking for a passionate, creative, analytical, and experienced Senior Data & Applied Scientist to work on the DeepDive.
The DeepDive is a data scraping and machine-learning data analytics tool that enables our Deal team to detect, score, and collect data on startups in order to facilitate investment decisions.
Our team plays a key role in providing data and analytics within OneRagtime and owns the end-to-end decision sciences charter, which includes:
- Bringing the relevant data together to deliver high value business scenarios.
- Creating key metrics, reports, and dashboards to support business decisions and improve user experience.
- Building advanced analytical models (behaviour segmentation, churn prediction, recommendation engines, etc.) that span investment, deal flow, and similar venture capital processes.
- Applying AI/ML to address strategic business requirements.
This position is for a Senior Data & Applied Scientist within the Data team at OneRagtime. This is a highly visible role driving key insights and building data-driven products. We are looking for someone who is:
- Curious to explore big data and unveil insights to help our business team make investment decisions
- Excited to build data-driven models to predict hard and soft indicators of key success factors
- Interested in the investment and start-up ecosystems and keen on learning more about Venture Capital
- Driven, self-directed, entrepreneurial, and focused on delivering the right results
- Willing to tackle hard problems in innovative ways
- Comfortable with written and oral communication
In this role, you will:
- Mine massive datasets to identify opportunities, stitch together data from multiple sources, and create connected datasets through probabilistic joining.
- Develop horizontal enterprise data analytics and AI/ML products.
- Work on data engineering problems to architect and develop operational models that run at scale.
- Define appropriate business metrics to measure the impact of the solution.
- Design and conduct rigorous experiments and evaluate results.
- Assume a technical leadership role to design, prototype, implement, test, deploy, monitor, and maintain Data Analytics/Machine Learning solutions.
- Be challenged to write scalable, distributed, and efficient components.
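To give a flavour of the probabilistic joining mentioned above, here is a minimal, illustrative sketch using only the Python standard library. The sources, company names, and field names are hypothetical assumptions, not OneRagtime data, and a production system would use a dedicated record-linkage approach rather than pairwise string similarity:

```python
from difflib import SequenceMatcher

# Hypothetical records from two scraped sources (illustrative only).
source_a = [{"name": "Acme Robotics", "country": "FR"},
            {"name": "Beta Analytics", "country": "DE"}]
source_b = [{"company": "ACME Robotics SAS", "sector": "hardware"},
            {"company": "Gamma Labs", "sector": "biotech"}]

def similarity(a: str, b: str) -> float:
    """Normalised string similarity in [0, 1], case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def probabilistic_join(left, right, key_l, key_r, threshold=0.6):
    """Link each left record to its best-scoring right candidate,
    keeping the pair only if the similarity clears the threshold."""
    joined = []
    for rec in left:
        best = max(right, key=lambda r: similarity(rec[key_l], r[key_r]))
        score = similarity(rec[key_l], best[key_r])
        if score >= threshold:
            joined.append({**rec, **best, "match_score": round(score, 2)})
    return joined

matches = probabilistic_join(source_a, source_b, "name", "company")
# Links "Acme Robotics" to "ACME Robotics SAS"; "Beta Analytics" has no match.
```

The threshold trades precision for recall: raising it drops noisy links at the cost of missing legitimate name variants.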
You will have ample opportunity for cross-group collaboration, technical product definition, and applied research.
- Communicate well with technical and non-technical audiences and contribute modelling expertise as a team player.
- Connect findings and recommendations to business initiatives and collaborate with key stakeholders at various management levels.
We are looking for candidates with the following qualifications:
- Experienced researcher: a Ph.D. in statistics, operations research, applied mathematics, computer science, or another engineering discipline, with a strong focus on data-science-related academic and industrial research spanning the end-to-end data science pipeline (problem scoping, data gathering, exploratory data analysis, modelling, insights, visualisations, monitoring, and maintenance).
- Demonstrated record of first-author publications in machine learning, AI, computer science, statistics, applied mathematics, data science, or related technical fields at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, and ACL).
- Experienced user of machine learning/data mining algorithms (linear/logistic regression, discriminant analysis, bagging, random forests, Bayesian models, SVMs, neural networks, etc.) with working knowledge of machine learning libraries such as scikit-learn, SciPy, NumPy, R, NetworkX, spaCy, and NLTK, along with familiarity with deep learning algorithms and workflows and development experience using Torch, Caffe, MXNet, or TensorFlow.
- Strong software development skills with proficiency in scripting to enable ETL development, along with experience developing and debugging in Python or R. Familiarity with the Linux/OS X command line, version control software (git), etc.
- Experience and familiarity with relational databases such as PostgreSQL. Experience with Hadoop, Spark, Hive, Cassandra, Kafka, and NoSQL databases is a plus.
- Ability to use cloud services to create data-driven micro-services: knowledge of AWS, Google Cloud Platform, etc.
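As a flavour of the predictive modelling this role involves, here is a minimal, self-contained sketch of a logistic-regression scoring model trained by plain gradient descent. The features, data, and "deal pursued" label are purely illustrative assumptions for the sketch, not OneRagtime's actual model; in practice one would reach for scikit-learn rather than hand-rolled training:

```python
import math

# Toy startup feature vectors (illustrative only):
# [monthly_growth_rate, founder_count]; label 1 = deal was pursued.
X = [[0.30, 2], [0.05, 1], [0.25, 3], [0.02, 1], [0.40, 2], [0.01, 2]]
y = [1, 0, 1, 0, 1, 0]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by stochastic gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

w, b = train_logistic(X, y)
# Score a new, unseen startup with strong growth.
score = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.35, 2])) + b)
```

On this toy data the growth-rate feature separates the classes, so the new high-growth startup scores above 0.5; real DeepDive features would of course be richer and noisier.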