The Open Data Science Conference takes place in London on June 15–16, 2022. Oliver Zeigermann will be there for you with a workshop.
Neural networks are powerful approximators for almost any function. However, they have no notion of common knowledge about the world, which often makes them fail miserably, especially when extrapolating to areas not covered by the training data.
We, as human beings, have that knowledge about the world and our domains of expertise; encoding it allows deep learning models to become much more robust and even to extrapolate. But how do we encode it?
Based on code examples, we will go through the known methods, including choosing the right loss, forcing sparsity, choosing good dimensions, lattices, types of network layers, and, last but not least, augmented training data.
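To give a flavor of the last method mentioned, here is a minimal sketch of encoding world knowledge via augmented training data: if we know the task is invariant to horizontal flips (as in much of object classification), we can hand that symmetry to the network by mirroring the training set. The function name and shapes are illustrative, not taken from the workshop material.

```python
import numpy as np

def augment_with_flips(images, labels):
    """Double the training set with horizontally mirrored copies.

    Encodes the prior "the label does not change under left-right
    flips" directly into the data, instead of hoping the network
    learns that symmetry on its own.
    """
    flipped = images[:, :, ::-1]  # mirror each image along its width axis
    return (np.concatenate([images, flipped]),
            np.concatenate([labels, labels]))

# toy batch: 4 "images" of shape 8x8 with dummy labels
images = np.arange(4 * 8 * 8).reshape(4, 8, 8).astype(float)
labels = np.array([0, 1, 0, 1])

aug_images, aug_labels = augment_with_flips(images, labels)
print(aug_images.shape)  # (8, 8, 8)
```

The same pattern extends to rotations, noise, or domain-specific transformations, each one a statement about which variations of the input the model should treat as equivalent.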
None of the techniques shown is new, and you might already know a good chunk of them (probably not all of them, though). But maybe you have not yet looked at them from the perspective of setting priors for your deep learning models by encoding the world knowledge you have.
Background Knowledge: Attendees should have trained neural networks in TensorFlow or a comparable tool such as PyTorch or JAX.