In an effort to understand the critical challenges facing neurotechnology dissemination, we posed two key questions for attendees to answer when registering for this event:
Question 1:
Briefly describe what you consider to be the most critical challenge for next-gen neurotechnology dissemination.
Question 2:
What do you propose to be a path to address this challenge?
The following is a condensed summary of responses from attendees at the 2018 Kavli Futures Symposium on Next-Gen Neurotechnology for Research and Medicine.
Challenge of dissemination of research technology for academic labs
Lack of funding for the continued maintenance of a project, resulting in innovative ‘one-off’ projects that are never developed to the point where they can be successfully adopted by outside labs. There is also a lack of the expertise required to properly deploy the technology.
- Create additional funding mechanisms aimed at outreach, maintenance, and incremental improvements of promising innovations.
- Staff new positions in non-traditional roles for academic institutions, including hybrid schemes which could involve academia, non-profits, and even companies.
- Create “center-like” entities that engage in continued development and user support, including adapting mature technologies for non-traditional users and providing centralized access. These centers could operate under a private/public model (a Fund for Sustainable Neurotechnologies) in which private entities, universities, and philanthropies would sustain the centers and subsidize access to technologies at cost.
- Partnerships, collaborations, and alignment among academic/institutional technology developers and companies active in the global neuroscience markets.
- Institutional support for the long-term availability of key technologies, including user training and support. Career paths within (or adjacent to) academia for production/support engineers and scientists working on incremental improvements to key tech.
- Centralized engineering support, potentially paid for by grant supplements, to work on “last mile” issues that will move promising neurotech from one-off demos to reliable consumer-grade products.
Challenge of dissemination of medical/consumer technology
Lack of understanding of how to articulate the value proposition to investors and entrepreneurs in order to secure funding.
- Build early collaborations between researchers, developers, and clinical and patient users, and establish early interactions with regulatory agencies.
What applied neuroscience R&D priorities would be most value-building for society, what operating models are required to advance those goals, and what supporting neurotechnologies would be most enabling of those operating models?
- We need to build a community of organizations and individuals with the ability to mobilize resources in support of applied neuroscience R&D priorities. This includes government agencies, but also philanthropies, private investors, and industry stakeholders. We should then work to build consensus within this group on answers to the questions posed above. From there, a roadmap for mobilizing resources to advance these goals, along with opportunities for coordination among stakeholders, should become clear.
Of the many challenges, I think that consumer/patient education and adoption is near the top. We need to think hard about how to make neurotech not scary to the general public.
- We should better understand how people make risk/benefit decisions related to neurotech and focus energy where we can win that battle with the consumer/patient.
- Help patients now with the technologies we have, and use that experience to learn what really matters in the field; this will be far more effective than projects that remain in the research-study realm.
For neurotechnology to be adopted, it is critical that it solve a problem well. Many technologies fail to be adopted due to a lack of clinical efficacy or poor usability by patients. A meaningful neurotechnology would provide meaningful efficacy and be highly usable by patients. Making a solution usable can be addressed today; improving efficacy, however, is a huge challenge, and the pathway to get there is long. Addressing efficacy has huge potential upside, as current market penetration for neuro therapies is less than 20%, due in large part to insufficient efficacy of treatment.
- To address this challenge, it is critical to employ collaborative efforts to clearly define a disease of interest, including appropriate patient selection and assessment of comorbidities, and to employ a targeted method of developing and validating both diagnostic measurements of the disease and models of the physiological system across many different use conditions. This would allow for rapid optimization of a therapy and an understanding of the proposed solutions prior to initiating a clinical study. Clinical studies are slow and expensive, so providing high confidence in the successful completion of a study is of huge value.
Lack of Funding
Lack of funding to support the combination of affordability, quality, and training needed to use new technology properly. Neurotechnology hardware is expensive and requires sophisticated training, but there is no real market to justify the investment in producing probes. On the other hand, individual labs do not have the financial or infrastructural resources to share their tools at scale. It is also hard to quantify the cost of acquiring the expertise required to properly deploy the technology.
- Support for centralized resources from the entire community. Dissemination centers can also be created at the universities leading development activities. The NeuroNex program at NSF funded a very limited number of centers with capabilities that are already established, while the new tools remain stuck in labs.
- Each funded R01 should have a separate dissemination budget that does not subtract from the research budget but is additional to it.
Standardization
Lack of standardization and interoperability between the various components that make up a system. In disseminating probe technology, we are often faced with considerable re-engineering of connectors and electronics. This limits how widely the technology can be disseminated, makes reliable and reproducible results difficult to obtain, and wastes precious resources.
- It would be tremendously beneficial to the field to agree on a set of standards that allows plug-and-play operation of the various components.
Lack of standardized interfaces in electrophysiology. Without such interfaces, the barriers to adopting new technology can be prohibitive. The most important places where standardized interfaces could have a big impact include connectors, cables, drivers, software plugin interfaces, data formats, and analysis code.
- Create committees to survey the space of available tools and make recommendations about which standards to adopt. Where standards are lacking, we need dedicated engineers to develop solutions. These efforts should be funded by government agencies, or by individual labs paying an annual fee. (A minimal sketch of such a plugin interface follows.)
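As an illustration of what agreed-upon software plugin interfaces and data formats could look like, here is a minimal Python sketch. All names (`RecordingChunk`, `AcquisitionPlugin`, `SimulatedProbe`) are hypothetical, not an existing standard or library; the point is that any acquisition backend implementing this small contract becomes interchangeable in downstream software.

```python
# Hypothetical sketch of a community-agreed acquisition plugin interface.
# Any vendor or lab implementing this contract would be interchangeable
# in downstream analysis and visualization tools.
from abc import ABC, abstractmethod
from dataclasses import dataclass

import numpy as np


@dataclass
class RecordingChunk:
    """Standardized unit of acquired data: samples plus the metadata
    needed to interpret them, regardless of which hardware produced it."""
    samples: np.ndarray       # shape (n_channels, n_samples), in microvolts
    sample_rate_hz: float
    channel_ids: list[str]
    t_start_s: float          # acquisition time of the first sample


class AcquisitionPlugin(ABC):
    """Contract that every acquisition backend would implement."""

    @abstractmethod
    def open(self) -> None: ...

    @abstractmethod
    def read_chunk(self, n_samples: int) -> RecordingChunk: ...

    @abstractmethod
    def close(self) -> None: ...


class SimulatedProbe(AcquisitionPlugin):
    """Stand-in backend producing noise, useful for testing downstream
    analysis code without any hardware attached."""

    def __init__(self, n_channels: int = 4, sample_rate_hz: float = 30_000.0):
        self.n_channels = n_channels
        self.sample_rate_hz = sample_rate_hz
        self._t = 0.0

    def open(self) -> None:
        pass  # real hardware would initialize drivers here

    def read_chunk(self, n_samples: int) -> RecordingChunk:
        chunk = RecordingChunk(
            samples=np.random.randn(self.n_channels, n_samples),
            sample_rate_hz=self.sample_rate_hz,
            channel_ids=[f"ch{i}" for i in range(self.n_channels)],
            t_start_s=self._t,
        )
        self._t += n_samples / self.sample_rate_hz
        return chunk

    def close(self) -> None:
        pass


if __name__ == "__main__":
    probe = SimulatedProbe()
    probe.open()
    chunk = probe.read_chunk(3_000)   # 100 ms at 30 kHz
    print(chunk.samples.shape, chunk.t_start_s)
    probe.close()
```

Analysis code written against `RecordingChunk` would then run unchanged whether the data came from a simulated probe, a commercial headstage, or a lab-built device.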
User implementation
- Standardization of neurotechnology characterization and validation in various brain regions and biological systems
Standards and focusing the efforts of the field
- A panel at a meeting
Development of a standardized platform, and training of users in that platform
- Rigorous quantitative validation of the platform’s properties so that users can appreciate their significance.
- Suitable multimedia training materials, including a very detailed step-by-step use protocol with a troubleshooting decision tree, on-site hands-on training, and web-based forums where users can search for answers, learn, and discuss.
A critical unsolved challenge is how to develop standards for analyzing and reporting data collected with advanced recording technologies. Currently it’s the “wild west” in the literature, to the great detriment of the field. If left unchecked, this problem will only get worse in the coming years as the scale of recordings expands.
- For starters, the top neuroscience journals need to make it clear to the community that current practice needs to change, and with their involvement a series of meetings should be organized to set out reasonable standards. This seems like a monumental undertaking, but the longer it is delayed, the more challenging it will be to ever reach a consensus. At the same time, the training of undergraduate and graduate neuroscience students needs to include computational and data-analysis methods.
While technology should be made widely available at a reasonable price through large-scale production, availability may not be the bottleneck for dissemination. The critical point could be the standardization of data formats, quality checks, and acquisition conditions, and to some degree the emergence of standard analysis pipelines for big-data management that keep track of the probabilistic nature of analysis criteria.
- I believe it will be crucial for institutional stakeholders to put dedicated effort toward the production and sharing of tools and analysis platforms. Emphasis should also be placed on training through dedicated and well-supported training centers. (A sketch of a provenance-tracking pipeline step follows.)
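To make the idea of standard analysis pipelines that keep track of analysis criteria concrete, here is a minimal Python sketch; the function names (`run_step`, `detect_events`) and the provenance fields are hypothetical, not a published standard. Each pipeline step records its parameters, code version, and an input checksum alongside its output, so results remain comparable across labs even when thresholds and other criteria differ.

```python
# Hypothetical sketch of a provenance-tracking analysis pipeline step.
import hashlib
import json
from datetime import datetime, timezone

import numpy as np


def run_step(name, func, data, params, version="0.1.0"):
    """Apply one analysis step and return (result, provenance record)."""
    result = func(data, **params)
    record = {
        "step": name,
        "version": version,
        "params": params,  # the analysis criteria actually used
        "input_sha1": hashlib.sha1(data.tobytes()).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return result, record


def detect_events(trace, threshold_sd=4.0):
    """Toy event detection: indices where the trace crosses a threshold
    expressed in standard deviations -- exactly the kind of criterion
    that must be reported for results to be reproducible."""
    z = (trace - trace.mean()) / trace.std()
    return np.flatnonzero(z > threshold_sd)


if __name__ == "__main__":
    trace = np.random.randn(30_000)
    events, prov = run_step("event_detection", detect_events, trace,
                            params={"threshold_sd": 4.0})
    print(f"{events.size} events detected")
    print(json.dumps(prov, indent=2))
```

Archiving the provenance record next to the data (or embedding it in a standardized file format) is what would let a second lab reproduce, or deliberately vary, the original criteria.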
Technology Development (technical issues)
Lack of non-invasive, highly localized tools for monitoring and activating/inactivating brain areas in humans. Neurotech in routine use in humans is, in general, several generations behind that used in animals. Routinely used clinical electrodes (such as DBS for PD, ECoG for language mapping, depth electrodes for seizures, DBS for depression, etc.) are decades behind what they could be.
- Research on alternative methods to convert neuronal activity into energy modalities that can be monitored outside the brain.
- Find ways to offer ‘prototypes’ of new electrodes that can be used in humans experimentally in a relatively straightforward manner. In particular, find ways to select materials and perform the needed toxicity testing so as to smooth the use of such electrodes under IRB approval without requiring years of preliminary animal work. Perhaps partner with the FDA/NIH to provide a straightforward pathway to such approvals for existing prototypes. That way, proof-of-concept studies could be done more routinely, likely leading to interest from commercial entities.
- Develop technologies that utilize penetrant forms of energy such as sound waves and magnetic fields.
The most critical challenge for next-gen neurotechnology is to achieve high-resolution, theranostic, multimodal exploration of the human brain in pathologies such as neurodegenerative diseases and glioblastoma. Deciphering and treating the inaccessible brain in a non-lesional way is a major challenge for academic and industrial research. The technical challenge is to develop multimodal, miniaturized devices by harnessing the power of nanotechnologies and electronics as well as smart data mining. Fast, equitable, and safe translation is the other, connected challenge: developing an innovative translational methodology that brings innovations to the bedside more safely and more quickly. It is also essential to anticipate the ethical and societal aspects of dissemination, the mandatory regulatory and industrial production, and medico-economic validation.
- To address this challenge, nano-electronics is a major path toward fulfilling the prerequisites of miniaturization and multimodal brain deciphering, integrating electrical, molecular, and multimodal imaging devices. Adding therapeutic capabilities, from electrostimulation to local drug delivery, is also essential. Core technologies industrialized at low cost by the electronics industry outside the health sector offer major opportunities for such medical devices. Industrial, regulatory, toxicological, societal, and medico-economic investigations must be integrated from the beginning. In this field, we have already exemplified such an approach by developing, step by step, a fast-track translational methodology for human brain exploration and therapy.
Lack of translation of technologies that offer cellular-scale neurochemical detection/intervention (hardware and wetware) to whole, behaving animal and human applications, across models.
- Create large consortia of labs that link efforts between developers (hardware/wetware/software) and testers/users to accelerate the development and testing loops and facilitate adoption by the scientific and clinical communities.
Need for integrated measurement of multiple modalities in the brain (neuronal activity, gene expression, anatomy).
- We propose to achieve this via a combination of molecular connectomics with in-situ recovery of neuronal activity and gene expression. One path to this exploits BARseq (for tracing long-range projections), 2-photon imaging (to recover activity), and in-situ sequencing (to resolve gene expression). Each of these technologies is currently capable of measurements on thousands of neurons in a single animal, and initial experiments on pairs of modalities (BARseq + 2p-imaging, BARseq + in-situ sequencing) suggest that they are fully compatible. This sets the stage for simultaneous large-scale recordings of neuroanatomy, gene expression, and neuronal activity in any mammalian brain. Dissemination of this technology is a major challenge, however, as it requires over $1M of equipment and specialized expertise in 2p-imaging, molecular biology, in-situ sequencing, and bioinformatics. We propose that broad-scale adoption of any such platform requires significant investment in automation, dedicated facilities, and novel funding models.
Extending lifetimes of devices implanted in the brain to those more practical for use in patient applications.
- Develop a more detailed and thorough understanding of what is happening at the biology-device interface, and introduce mitigations for the response: for example, reducing device size or introducing materials that better prevent bodily responses that tend to degrade device performance.
Integrated technologies to excite and control multiple single neurons
- Multifunctional probes with optical, electronic, chemical capabilities.
- Electronics for closed-loop control across different domains
One of the most critical challenges for next-gen technology is to develop tools that enable highly multiplexed, spatiotemporal recording of neuronal pathways across multiple brain layers.
- A path to address this challenge is to create a network between experimentalists and technology developers so that technologies are developed to best address the neurobiology problems of interest, and so that validation and troubleshooting of the technology can be guided by real-time experimental data in biological systems.
Scaling to very large numbers of neurons
- Inexpensive open-source devices and collaborative sharing of knowledge
Understanding the diverse needs of the community regarding the properties of neurotransmitter receptors.
- Once we understand these needs more precisely, we can try to optimize the process through which we generate these receptors.
Technology Development (infrastructure)
Reproducibility and validation. Tools need to be robustly manufacturable and offer unprecedented capabilities. New tools need to be validated before being distributed widely. The methodology and initial results should be reproducible.
- Pre-register the methodology and widely publicize results. Work with the original PIs to ensure the methodology is understood, but also have an independent evaluation.
- Creation of a hub where engineers, physicists, neurobiologists, chemists, data scientists etc. work together in a very integrated and close way. The hub needs to be supported by a large team of technicians/engineers (i.e., non-academics) who can help to smooth out the seams between the disciplines (e.g., take a newly developed tool by academic researchers and make it easier to use by neurobiologists, develop training courses on new tools, validate new tools on some “standard” neurobiology experiments).
The most critical challenge for next-gen neurotechnology dissemination is validating the technology in multiple leading groups that are willing to adopt it early, demonstrate its usage, and establish initial protocols. The next challenge is to introduce the developed technology to the community and provide hands-on training in how to use it, followed by a continuous supply of the technology at an affordable price.
- Eventually the technology should be provided by the private sector as a product. It is important to foster an infrastructure and ecosystem in which technology dissemination and translation can flourish organically through natural selection and survival based on utility and merit. I think a foundation initiative to enforce or promote one winning technology would disturb the landscape of a healthy ecosystem and would not be sustainable. The NSF technology hub program or similar mechanisms could be a good path to address this challenge.
Fabrication and packaging. Next-gen neurotechnology may require integration of electronic, fluidic, and optical components with very high densities. Fabrication of these systems with high yields and compact overall form factors will require innovation in fabrication and packaging. Fabrication, assembly and integration may require a skilled workforce, and go beyond a one-off proof-of-concept demonstration.
- Leverage existing mature fabrication processes used for CMOS circuits and silicon photonics as well as the sophisticated assembly and packaging processes used for these applications.
- Coordinated development programs are one model. Examples from the particle physics community include significant technology programs for high-luminosity colliders. Beyond R&D, specialized fabrication and assembly, with a view towards dissemination, may require establishing dedicated centers.
Regulatory approval to make existing neurotechnologies available for human application (research and clinical)
- Integrate research, clinical, and engineering teams for streamlined FDA applications.
Lack of easily accessible distribution channels. This is mostly solved for genetic tools (Addgene) and software (GitHub), but more challenging for hardware and chemical tools.
- Better online locations for agglomerating information about how to access various neuroscience research tools.
These comments are based on real-world experience distributing over 1,000 Neuropixels probes to about 40 labs in the US and Europe in the 12 months from Q4’17 to Q3’18. The most critical challenges we observed fall into the following three categories: (1) training of graduate students/postdocs in the use and application of these advanced high-density probes, including surgery, spike sorting, etc., and implementation in their research; (2) continuous development, bug fixing, and maintenance of the control and spike-sorting software; and (3) continuous online/email-based first-line/second-line support for hardware and software issues from the field.
- My proposal would be to set up an independent non-profit entity, with core funding from private foundations such as HHMI, Wellcome, Gatsby, Allen, and Kavli, committed for two 5-year periods, with an annual review and a major review after the first 5-year period. This would be similar in spirit to what the Linux Foundation, the World Wide Web Consortium, and the Raspberry Pi Foundation do to support their ecosystems.
- 1. Choose a device/system that is already manufactured (and ready to distribute in quantity) and just needs selling, training, and support. Choose a governing committee that will volunteer to start. Capitalize by requesting loans from charities interested in this space. Find funding for startup costs (legal, license, infrastructure, staff) through grants from charities, grants from NIH, and markup on sold items. 2. Using the steering committee/board of directors: (a) solicit proposals for added devices/systems that either (i) need transition to manufacture or (ii) need distribution; (b) for (ii), just sell them; for (i), find a contract scale-up path (funded by ???). 3. Adjust as things progress. This is clearly a learn-as-you-go exercise.
- Understanding the functioning of the brain that will be engaged by the forthcoming wave of advanced technology. For example, a deeper understanding of how visual information is encoded and translated into perception will be critical for the development of an effective cortical visual prosthetic. Some of this understanding can best be advanced by testing devices in human subjects and trained NHPs, combining neurophysiology and behavioral studies to develop and refine neurotechnology and its implementation.
- Greater collaboration between neuroengineers, neuroscientists, and clinician scientists will be critical.
- One of the most critical challenges in disseminating neurotechnology is to ensure that neurofoundry capability is available around the world to manufacture reliable components and disseminate technology among research teams.
- Consider technology manufacturing capability as part of the challenge. Prototyping is important, but finding a path to industrialization is key, and LETI tries to help with that: refining the technology process by taking into account performance and processability as well as cost and yield issues.
- A major challenge for neurotechnology dissemination is ensuring a robust environment for comparing competing technologies: how to decide, in an unbiased way, which technologies need big pushes, which don’t, and when.
- It is critical that a pathway exists for end-user engagement as alpha and beta testers
Technology Development (culture)
There is a gap between critical disciplines such as neuroscience, systems and device engineering, and computational/computer science, and likewise between neuroscientists and applied physicists.
- Establish more interdisciplinary centers and institutes to bridge this gap, hold more cross-field conferences, and even design new graduate-level courses for this purpose.
Lack of incentives and rewards for developers to disseminate their technologies.
- Need to develop ways to reward dissemination and to engage academic leaders.
Need better tools for sharing, aggregating, and managing data sets across the community
- Standards development and commercial solutions for managing data and promoting open sharing
The most critical challenge is the impedance mismatch between neuroscientists and engineers. Most engineers are still not aware of the complexity of the problems; they see neurotechnology development as a standard research task where they can directly apply their past approaches to easily solve problems. Most of the time they do not gain enough background on the subject, and their problem-solving attempts, which rely on many engineering assumptions, fail easily. Every year many different technologies are developed and published, but what percentage of them are really translated into practical neuroscience use? Similar issues could be raised from the neuroscientists’ perspective as well. When a new technology fails in an animal validation experiment, it is difficult to be patient and persistent enough to try it many times and address the failure systematically, working together with the engineers.
- The path could be more specific programs (workshops, meetings, conference sessions, funding opportunities) to train both engineers and neuroscientists on the subject. Such programs should also involve basic lectures, short courses, etc., to make sure that both parties are well trained in the fundamentals of each topic. Otherwise, even if you can mass-produce a new technology and disseminate it to many labs across the world, the chances that they will adopt it to replace their decades-old techniques are very low. Building that confidence requires more effort, in order to make neurotechnology development sustainable in the long term.
Ensuring appropriate use, data privacy, and integrity in neurotechnologies. Addressing technology hype and supporting the evidence base. Better understanding the ethical, legal, and social implications of neurotechnologies for consumer markets and for clinical use. Engaging with the public and building trust.
- Engage all stakeholders and translate ethical, legal and societal needs and opportunities into policies.
Commercialisation and the associated legal structures. Presumably a large majority of next-gen neurotech developed by university-based academics and specialist partners (e.g. an electronics foundry) will be funded by philanthropic or government (taxpayer) sources. Next-gen neurotech will take many different forms, e.g. molecular vs. hardware. Can we create a common dissemination pathway for very different types of neurotech? Each development project will involve different numbers, types, and expectations of the parties involved. Given the different background of each project, how can we easily transfer new products to market?
- Gather information and create an evidence base to inform future development and dissemination. It could be useful to gather all the challenges that individuals at the meeting have experienced in taking a new product from development to the general neuro community. Can standard forms of legal agreement be created to minimise the challenges of getting a new product to market? Are existing companies (that sell and support other neurotech) the best partners? What are the challenges for existing companies in taking on new products? What are standard and acceptable mark-ups per unit of neurotech? Should a new legal entity for neurotech dissemination be created that could be the go-to place for academically developed neurotech to get to market? Would having one single entity (with sales, support, and management staff, and standard contracts) to commercialise a product make everything simpler than each combination of developer parties negotiating its own terms and sales? Is this realistic and plausible?
New technologies are not made robust enough for wide dissemination. One way in which interfaces fail to be robust is that the procedures required to use the technology are unreliable even in the labs that developed them, or are very difficult to train others to carry out. A second specific area in which systems are not robust is closed-loop experimental systems: while limited efforts have been made to develop platforms that would allow algorithms to be moved from one lab to another, these have not gained wide adoption, and the vast majority of closed-loop experiments end up as re-inventions of the wheel. A third area in which these approaches fail to be robust is the lack of open-source, intuitive software tools for data analysis; there is an unfortunate gap between software packages that assist with low-level data aggregation (i.e., setting up databases, making queries) and high-level data analysis.
- Software: Support for professional development of scientific computing tools is a well-recognized problem that is beginning to be addressed in the larger world of data science. Other fields of brain science (e.g., cognitive psych) have done a better job of identifying and creating opportunities for individual labs to share tutorials with each other to disseminate useful tools. Experimental neuroengineering can do a better job of this as well as a better job of providing polished tools to the community.
- Hardware: Some limited examples (Open Ephys/Intan, or the Miniscope project) have revolutionized tool development by providing high-quality tools with enough documentation for wide adoption, at a price point low enough that they can be easily evaluated. For next-generation tools, understanding how and in which technologies to invest to enable the next generation of interfaces is a critical question. (A sketch of a portable closed-loop interface follows.)
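As a sketch of how closed-loop algorithms could be made portable across labs, consider the following minimal Python example. All names (`ClosedLoopController`, `ThresholdController`, `run_loop`) are hypothetical, not an existing platform; the idea is that the algorithm sees only an observation vector and returns a stimulation command, so the same controller class can move between rigs whose lab-specific I/O is wrapped separately.

```python
# Hypothetical sketch: separating a portable closed-loop algorithm from
# lab-specific hardware I/O.
from abc import ABC, abstractmethod

import numpy as np


class ClosedLoopController(ABC):
    """The portable part: a pure observation -> command mapping."""

    @abstractmethod
    def update(self, observation: np.ndarray) -> float:
        """Map the latest neural observation to a stimulation amplitude."""


class ThresholdController(ClosedLoopController):
    """Toy controller: stimulate in proportion to how far mean activity
    exceeds a setpoint."""

    def __init__(self, setpoint: float, gain: float = 1.0):
        self.setpoint = setpoint
        self.gain = gain

    def update(self, observation: np.ndarray) -> float:
        error = float(observation.mean()) - self.setpoint
        return max(0.0, self.gain * error)  # proportional, non-negative


def run_loop(controller, read_fn, write_fn, n_steps):
    """Lab-specific glue: read_fn/write_fn wrap the local acquisition and
    stimulation hardware. Only this function changes between rigs."""
    for _ in range(n_steps):
        write_fn(controller.update(read_fn()))


if __name__ == "__main__":
    # Simulated rig: random "activity" in, stimulation commands printed out.
    rng = np.random.default_rng(0)
    run_loop(ThresholdController(setpoint=0.0, gain=2.0),
             read_fn=lambda: rng.normal(size=16),
             write_fn=lambda cmd: print(f"stim amplitude: {cmd:.3f}"),
             n_steps=3)
```

Under this kind of separation, re-implementing a published closed-loop experiment would mean writing only the `read_fn`/`write_fn` glue for the local rig, rather than re-inventing the controller.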
The biggest problem, especially for hardware, is similar to much of commercial product development from research prototypes: the prototype is usually used by the same people who built it, and it only works when used in a very specific way that is known only (and often not even consciously) by those people. This is especially an issue for expensive, fragile, finicky equipment, where not knowing everything you are (and aren’t) doing can lead to major problems. Thus, it is not enough to replicate the equipment; you must also either make it more robust to human error or properly document all of the conscious and unconscious knowledge. This is a particular issue because documenting and “robustifying” one’s technology is boring compared to the initial development, so it needs an extra push to happen.
- Partly by pulling techniques from industry, where there are whole groups of engineers whose job is exactly this (application engineers, product engineers, and QA engineers in particular). Even where there is no industrial-scale budget for specialists, some of the protocols they use can be applied. It also helps if there is a solid motivation for the inventor of a technology to document and develop it with a “product” mindset, perhaps through some extra reward for doing so.
Broad Access
- Reinforcement of an open-access/sharing culture at all levels, and commensurate development of cyber- and other infrastructure that enables such a collective approach.