Sounds vaguely like my caffeine-fueled musing: https://github.com/noizu/artificial_intelligence "3. Working rudimentary memory, and reusable modular networks.
So, keeping in mind the concepts from 1 & 2, and given a large amount of computing power plus some caveats, it should be possible to train modular deep networks that can be dropped into existing solutions and that can alter their behavior/focus based on upstream signals, with minimal additional training and without the need to alter the training of the core module. The training process, however, becomes much more involved.
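The "drop in without retraining the core" idea above could be sketched roughly as a frozen pretrained module plus a small adapter modulated by an upstream signal. This is only an illustrative sketch of my reading, not code from the repo; the names (`UpstreamAdapter`, the FiLM-style gain/bias trick, all dimensions) are assumptions of mine:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a pretrained "core" module frozen in place, with a
# small adapter whose behavior is steered by an upstream control signal.
# Only the adapter would need (minimal) additional training.

core = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
for p in core.parameters():
    p.requires_grad = False  # the core module's training is never altered


class UpstreamAdapter(nn.Module):
    """Scales/shifts core features based on an upstream signal (FiLM-style)."""

    def __init__(self, feat_dim=64, signal_dim=4):
        super().__init__()
        self.film = nn.Linear(signal_dim, 2 * feat_dim)  # predicts gain and bias

    def forward(self, feats, signal):
        gain, bias = self.film(signal).chunk(2, dim=-1)
        return feats * (1 + gain) + bias


adapter = UpstreamAdapter()
x, signal = torch.randn(8, 32), torch.randn(8, 4)
out = adapter(core(x), signal)  # only adapter parameters would be optimized
```

The point of the sketch is just the division of labor: the upstream signal changes what the downstream consumer sees without touching the core's weights.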
The approach I would propose here is to:
Prepare a deep network in the usual manner, then define output sections ("these nodes must contain the make of the car in the image I am viewing", "those nodes must encode the color of the car", etc.).

a. Break the output layer into regions, each of which must contain a specific piece of data in its entirety.

b. Train neural networks against those regions to answer or show that specific data, and use these split-off networks to reinforce learning.

c. Once the network is sufficiently trained that each split-off network correctly interprets its data to a sufficient degree, re-run the network and record the output of all the split-off networks and the intermediate layer they split from.

d. Repeat, breaking up large sub-layers into smaller sections that can organize machine data into specific segments that can be routed to other components.
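Steps a and b above could look something like the following: carve the backbone's output layer into named slices and train a small head per slice, with each head's loss reinforcing its region of the shared output. This is a minimal sketch under my own assumptions; the region names (`make`, `color`), slice boundaries, and all dimensions are illustrative, not taken from the repo:

```python
import torch
import torch.nn as nn

# Regions of the output layer that must each hold one kind of data in
# its entirety (step a). Slice boundaries are hypothetical.
REGIONS = {"make": (0, 16), "color": (16, 24)}


class Backbone(nn.Module):
    def __init__(self, in_dim=64, out_dim=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim)
        )

    def forward(self, x):
        return self.net(x)


class RegionHead(nn.Module):
    """Split-off network trained to read one region of the output layer."""

    def __init__(self, region, n_classes):
        super().__init__()
        self.lo, self.hi = REGIONS[region]
        self.head = nn.Linear(self.hi - self.lo, n_classes)

    def forward(self, z):
        return self.head(z[:, self.lo:self.hi])


backbone = Backbone()
heads = {"make": RegionHead("make", 10), "color": RegionHead("color", 6)}
params = list(backbone.parameters()) + [
    p for h in heads.values() for p in h.parameters()
]
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step (step b): each head's loss pushes its region of the
# shared output layer to encode that head's data.
x = torch.randn(8, 64)
labels = {"make": torch.randint(0, 10, (8,)), "color": torch.randint(0, 6, (8,))}
z = backbone(x)
loss = sum(loss_fn(h(z), labels[r]) for r, h in heads.items())
opt.zero_grad()
loss.backward()
opt.step()
```

Step c would then amount to freezing this and logging `z` alongside each head's output for later routing; step d repeats the same carving on intermediate layers.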