President and Executive Director
It is becoming apparent that the complex task of understanding nanomaterial interactions with biological systems must be decomposed and worked on collaboratively at both the ab initio and heuristic/behavioral levels. An analogy with 60 years of progress in semiconductor chip design and simulation is suggested.
October 19th, 2008
Nanomaterials-Biological Systems Interactions - Addressing the Complexity
Imagine two Starbucks patrons on holiday in Manhattan. The first, from rural Oregon, exclaims in shock "Geez, a stupid cup of coffee costs more than 2 bucks here!" while the other, a Parisian, marvels in delight "Mon dieu, café au lait pour moins de 2 euros" ("My goodness, café au lait for less than 2 euros"). OK, I know no Parisian would ever admit to being pleased by Starbucks, but hang in there with me...
Expectations depend much on where you come from, and so it was at a terrific nanomaterials characterization workshop last week at NIST, where experts from the Nanotechnology Characterization Laboratory, National Cancer Institute, NIEHS, NIST, NIBIB, EPA, industry and academia gathered to review initial results on a first set of inter-laboratory experimental data on the physical properties and biological effects of five different sets of nanomaterials believed to be quite pure (three NIST reference Au nanoparticles and two dendrimers). Even in this comparatively simple, well-controlled setup, there were several revelations.
First, consistent measurement results of any kind are difficult to achieve, and only the greatest care produces valid data. More on that later.
Second, biological assays are a whole different ballgame from physicochemical work. The physical types (including me) in the room were stunned to hear that 20% variability in biological test results (e.g. cytotoxicity, hemolysis) is considered a great outcome - and just about the best that is achieved when "the best person in the lab" is given the job. The physicochemical folks declared they would be completely dissatisfied with any technique that didn't have 1% or better repeatability, and certainly wouldn't tolerate any operator dependence. We all wondered (and some asked out loud) how our bio brethren could possibly explain their complacency with such a dismal status quo. It was one of those teachable moments, and they answered. In a nutshell, living things don't just sit there; they adapt - to everything. Different proteins are expressed depending on time of day, lighting conditions, ambient temperature, etc. And of course one test cell or animal is almost certainly not atomically identical to the next one to begin with.
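To put those two repeatability standards side by side, repeatability is commonly expressed as the coefficient of variation (standard deviation relative to the mean). The numbers below are made up purely for illustration - they are not data from the workshop:

```python
import statistics

def coefficient_of_variation(values):
    """Relative standard deviation: sample stdev divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate measurements, illustrative only:
# a physicochemical size measurement clustering tightly around 30 nm...
dls_size_nm = [30.1, 29.9, 30.0, 30.2, 29.8]
# ...versus a cytotoxicity assay (% cell viability) with large
# run-to-run spread of the kind described at the workshop.
viability_pct = [62.0, 81.0, 70.0, 55.0, 74.0]

print(f"physicochemical CV: {coefficient_of_variation(dls_size_nm):.1%}")
print(f"bioassay CV:        {coefficient_of_variation(viability_pct):.1%}")
```

With these synthetic numbers the physicochemical measurement comes in well under 1% while the bioassay lands in the double digits - the same order-of-magnitude gap the two camps were arguing about.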
Indeed, the failure rates and variability on the bioassay legs of this first study under the auspices of the NanoHealth Enterprise "community of interest" were significantly higher than on the physicochemical side, but some valuable lessons were captured (e.g. ways to simplify sample preparation) that will lead to tighter distributions in the next set of experiments.
Anecdotally, we heard that other groups with more aggressive/ambitious nanomaterial characterization efforts are also discovering that getting useful data is more complicated and difficult than expected. The task of "just testing" nanomaterials for safety and quantifiable risk to humans (in the event of ingestion) is not as easy as it sounds.
And yet, there is a common language developing - with valuable contributions from both the physics and bio sides of the aisle (which BTW is one of the really great wide open opportunities in interdisciplinary nanoscience) - regarding what is needed, and I had the sense that useful progress is poised to accelerate. But it's going to be a while before we completely understand - at a fundamental level - what is going on. One example: it is known that nanoparticles in biological "media" (e.g. different bodily fluids) acquire a loosely bound protein "corona" that surely affects their biological identity/surface chemistry (one of those things on the short list of what must be understood). Some may be tempted to think that as the outside layer of the particle, the makeup of that corona is "all you need to know". But if it is, doesn't that call into question the whole enterprise of nanoparticle surface functionalization, e.g. for binding to tumor sites? The task of ab initio studies of relevant mechanisms is obviously made much more challenging and complex by this issue.
Which leads me (not even a physicist, just an engineer) to some possibly obvious thoughts and analogies regarding practical decomposition of the nanomaterial biological effects problem.
The design of the first integrated circuits (with bits of doped silicon or germanium) was completely bottom-up, with a lot of "cut and try" in the absence of critical understanding of things like crystal defects and interface charge states that are well-understood and controlled today. At roughly the same time, the first widely used high-level computer languages were coming out, and discrete element circuit simulation programs began to appear. 25 years after the invention of the transistor, these two trends met up in the Berkeley SPICE (Simulation Program with Integrated Circuit Emphasis, early versions written in FORTRAN) program, which for the first time incorporated pretty good 3-terminal transistor models (not simply equations governing motion of charges in pieces of semiconductor). SPICE is still used and sold, but it alone is far from enough to design and understand the billion-device chips of today. For that we need much more sophisticated work in both lower-level device and process models (with roots in the Stanford SUPREM program in which I played a very small part) and high-level abstractions - "behavioral models" - of major circuit modules containing tens to millions of devices. Related advances continue today, but the overall effort is so complex that no one person ever works (or could) at all levels of detail simultaneously.
I think the effort to understand nanomaterials-biological system interactions is like this, and in fact analogous pieces are already in view. We have much of the fundamental science and molecular models (i.e. atom by atom 3D space locations). We have many (not all) of the needed physicochemical characterization tools and techniques (e.g. TEM, SEM, light scattering methods), and we have the "cut and try" (administer nanomaterials of varying purity/control to cells and animals, observe what happens and - if one is so inclined - issue a provocative press release). The latter needs to be better coordinated with the former, but there is no practical hope of doing it at every level of detail (remember the perverse complexity of the biological systems - even harder to deal with than nanomaterial purity). So we need (both now and later, with sophistication increasing as we learn) to incorporate high-level abstractions/behavioral/data mining approaches into these efforts. A prime example is the N-BI knowledge-base arising from our Safer Nanomaterials and Nanomanufacturing effort: see http://greennano.org and http://oregonstate.edu/nbi/pages/
Initially populated with embryonic zebrafish data on over 150 nanomaterials, the N-BI approach is to combine multiple observations (from morphological to genetic) into higher-level toxicity abstractions such as the EZ-Metric (Embryonic Zebrafish Metric for Nanomaterial Toxicity). The bigger idea is a collaborative informatics project to combine as much data of this kind as possible from multiple sources (e.g. from cells, yeast, fruit flies and more) with the right selection of physicochemical properties of the tested nanomaterials, and mine the resulting database to formulate hypotheses that would not otherwise have been apparent.
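To make the idea of a higher-level toxicity abstraction concrete, here is a minimal sketch of one way to roll several assay endpoints into a single composite score. The endpoint names, weights, and numbers are hypothetical illustrations of the approach - they are not the actual EZ-Metric definition:

```python
# Hypothetical composite toxicity score: combine several normalized
# assay endpoints (0 = no effect, 1 = maximal effect) into one number,
# weighted by how severe each endpoint is considered to be.
# These endpoints and weights are illustrative, NOT the real EZ-Metric.
ENDPOINT_WEIGHTS = {
    "mortality": 0.5,         # most severe outcome, weighted highest
    "malformation": 0.3,      # morphological defects
    "delayed_hatching": 0.2,  # developmental delay
}

def composite_toxicity(observations):
    """Weighted average of normalized endpoint scores, in [0, 1]."""
    total = sum(ENDPOINT_WEIGHTS[name] * score
                for name, score in observations.items())
    return total / sum(ENDPOINT_WEIGHTS[name] for name in observations)

# Two hypothetical nanomaterial records from such a knowledge base:
benign = {"mortality": 0.0, "malformation": 0.1, "delayed_hatching": 0.2}
toxic  = {"mortality": 0.8, "malformation": 0.6, "delayed_hatching": 0.9}

print(composite_toxicity(benign))  # low score
print(composite_toxicity(toxic))   # high score
```

The point of such an abstraction is the same as a SPICE behavioral model: once every tested material is reduced to a comparable score alongside its physicochemical properties, the database can be mined for structure-toxicity hypotheses without re-deriving every mechanism from first principles.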
We're making great progress on this and more. We'd love to have you join us to discuss it all with a great set of speakers from around the nation at our 4th annual ONAMI Greener Nano conference, March 2-3, 2009.
Event program and details may be found at http://oregonstate.edu/conferences/greenernano2009/