Alquimics is the algorithm behind the MICS platform. It is being built partly by handcrafting (a labour-intensive programming technique that involves writing explicit rules and templates) and partly by machine learning (a type of AI that learns to perform a task by analysing patterns in data).
From the outset, the team has faced a defining question: which parts of the platform's brain should be handcrafted, and which should employ machine learning? Handcrafting is the more traditional approach, in which scientists painstakingly write extensive sets of rules to guide the AI's understanding and assessments. Statistically driven machine-learning approaches, by contrast, have the computer teach itself to assess impact by learning from data.
Machine learning excels at so-called classification problems, in which neural networks find unifying patterns in noisy data. But when it comes to translating the indicators that characterise citizen-science projects into an impact assessment, machine learning has a long way to go. As such, the team finds itself struggling, like the tech world at large, to strike the best balance between the two approaches.
Handcrafting is unfashionable; machine learning is white-hot. Yet to help Alquimics automatically generate impact assessments, the team will have at its disposal only 10 to 30 example instances of about 150 indicators.
So the MICS team is writing extensive assessment-guiding rules. The team has created five "impact domains": environment, economy, governance, science, and society. The MICS system is being engineered to know the core elements of each of the five domains and to bounce around among them. And the team is dividing its platform's brain into a committee of smaller algorithms, each with a speciality of its own.
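The "committee of smaller algorithms" idea can be sketched in code. Everything below is an illustrative assumption: the indicator names, the scoring rules, and the function names are invented for this sketch and are not the actual MICS implementation.

```python
# Hypothetical sketch of a committee of domain specialists: one small
# rules-based scorer per impact domain, combined by a coordinator.
DOMAINS = ["environment", "economy", "governance", "science", "society"]

def score_environment(indicators):
    # Example handcrafted rule: reward projects that gather environmental data.
    return 1.0 if indicators.get("collects_environmental_data") else 0.2

def score_society(indicators):
    # Example handcrafted rule: more volunteers suggests broader social reach.
    return min(indicators.get("num_volunteers", 0) / 100, 1.0)

def score_default(indicators):
    # Placeholder for the domains not sketched here.
    return 0.5

SPECIALISTS = {
    "environment": score_environment,
    "society": score_society,
}

def assess(indicators):
    """Ask each domain specialist for its score, keyed by domain name."""
    return {d: SPECIALISTS.get(d, score_default)(indicators) for d in DOMAINS}

report = assess({"collects_environmental_data": True, "num_volunteers": 250})
```

The appeal of this shape is that each specialist stays small and auditable, and the coordinator can "bounce around" among domains simply by calling each one in turn.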
Impact assessment is a daunting challenge. It is especially tough for a machine-learning system because there usually isn't a verifiably correct way to assess impact. Neural networks work best when there is a clear goal, like winning the game of Go, which the system can learn to reach through trial and error on a massive scale. Impact assessment has no such well-defined goal.
Initially, the MICS team is just guessing how much to weight each metric. But by the autumn, a neural network will have learned to rejigger the weights automatically.
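A toy sketch of this hand-guess-then-learn idea, with invented numbers: the indicator values, initial weights, target score, and learning rate below are all illustrative assumptions, and the single gradient-style step stands in for whatever training the MICS team actually uses.

```python
# Three example indicator values and the team's initial hand-picked weights.
indicators = [0.8, 0.3, 0.5]
weights = [0.5, 0.2, 0.3]

def weighted_score(w, x):
    """Combine indicator values into one score via a weighted sum."""
    return sum(wi * xi for wi, xi in zip(w, x))

initial = weighted_score(weights, indicators)

# One step of the kind of adjustment a learning system might make:
# nudge each weight to shrink the gap between the score and a target.
target = 0.9   # invented "correct" assessment for this example
lr = 0.1       # learning rate
error = initial - target
weights = [wi - lr * error * xi for wi, xi in zip(weights, indicators)]
```

After the step, the score moves toward the target; repeated over many examples, this is how learned weights come to replace the initial guesses.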
So the team is taking a fairly middle-of-the-road approach, mixing rules-based programming and machine learning in its system. What gives the MICS platform an edge over existing systems is an interface and an interaction that people will enjoy. The system will not only ask questions about indicators but will also provide interesting, conversational feedback, so that users can learn more about their projects and how to improve them to achieve a more substantial impact.
To achieve this, the team is handcrafting plenty of feedback language: "So tell me, is the project more about science or engagement?", and the like.
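Handcrafted feedback like this often amounts to a lookup table of questions keyed by indicator. The sketch below is a minimal assumed implementation: the indicator keys and the second question are invented, while the first question is the one quoted above.

```python
# Hypothetical table of handcrafted conversational prompts, one per indicator.
FEEDBACK = {
    "project_focus": "So tell me, is the project more about science or engagement?",
    "num_volunteers": "How many volunteers take part in the project?",
}

def next_question(answered):
    """Return the first handcrafted question the user hasn't answered yet."""
    for indicator, question in FEEDBACK.items():
        if indicator not in answered:
            return question
    return "Thanks, that's everything I need for now!"

# A user who has already said the project is about science gets the next prompt.
q = next_question({"project_focus": "science"})
```

Because the questions are written by hand rather than generated, the team keeps full control over the tone of the conversation, at the cost of the labour of writing each prompt.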