
CELLNET: Biomimetic Cell Interface Adaptable to Physical Stimuli and Environment
  • Hoon Yoon : Architecture, Art, and Planning (AAP), Graduate Student, Cornell University, New York, USA
  • Wendy Ju : Information Science, Associate Professor, Jacobs Technion-Cornell Institute at Cornell Tech, New York, USA
  • Farzin Lotfi-Jam : Architecture, Art, and Planning (AAP), Assistant Professor, Cornell University, New York, USA
  • Jenny Sabin : Architecture, Art, and Planning (AAP), Professor, Cornell University, New York, USA

Background CellNet leverages hormone-regulated processes in plant cells, such as growth and proliferation, to create an interactive interface responsive to user gestures and environmental inputs. The study investigates the integration of bio-inspired interfaces with user interaction by simulating plant cell mechanisms so that the interface responds to external stimuli, such as hand and body gestures, within augmented reality (AR) and mixed reality (MR) environments. Using insights from Arabidopsis sepal hormone signaling, the study models artificial cell interfaces that mimic the morphogenetic and morphological traits of microscopic cells.

Methods The system incorporates sensors such as the Leap Motion controller and Kinect V2 to detect user gestures, converting these actions into dynamic variables that influence the shape and behavior of the cell interface. These inputs are processed through Grasshopper, a visual programming tool for Rhino3D, to model a cell interface morphologically responsive to a user’s physical motions. To embody the biomimetic mechanism, the Kangaroo Physics plugin for Grasshopper simulates the interface responses, while the Fologram plugin and the Microsoft HoloLens2 HMD integrate the interfaces into mixed reality environments. This setup aligns virtual models with real-world spaces, offering an immersive and interactive user experience. Iterative testing ensures that the interface remains responsive and engaging during real-time interactions.

Results The research demonstrated that CellNet successfully produces bio-inspired interfaces capable of adapting to user gestures in real time. However, rendering challenges emerged with large datasets and complex simulations. To overcome these issues, script optimizations were implemented, including reducing visual complexity and simplifying parameter values. These adjustments greatly improved performance and responsiveness. Despite the obstacles, CellNet proved to be a reliable platform for exploring biological principles in interactive and immersive ways.

Conclusions CellNet presents a unique opportunity to simulate biological processes through interactive, holographic interfaces. It allows users to engage with biomimetic designs, opening new avenues for studying and visualizing biological systems. The study concludes with the vision of extending the capabilities of this platform to broader scientific and educational applications in the future.

Keywords:
Gestural User Interface, Biomimetic Design, Graphic User Interface (GUI), Mixed Reality (MR), Extended Reality (XR).
pISSN: 1226-8046
eISSN: 2288-2987
Publisher: Korean Society of Design Science (한국디자인학회)
Received: 23 Oct, 2024
Revised: 31 Jan, 2025
Accepted: 31 Jan, 2025
Printed: 28 Feb, 2025
Volume: 38 Issue: 1
Page: 7 ~ 29
DOI: https://doi.org/10.15187/adr.2025.02.38.1.7
Corresponding Author: Hoon Yoon (hy358@cornell.edu)

Citation: Yoon, H., Ju, W., Lotfi-Jam, F., & Sabin, J. (2025). CELLNET: Biomimetic Cell Interface Adaptable to Physical Stimuli and Environment. Archives of Design Research, 38(1), 7-29.

Copyright : This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted educational and non-commercial use, provided the original work is properly cited.

1. Introduction

CellNet investigates the potential of bio-inspired artificial cell interfaces that can be controlled through external stimuli like hand and body gestures. By embedding plant cell biological mechanisms, these interfaces—designed for either 2D screens or 3D Mixed Reality (MR)—aim to interact with users through biomimetic, shape-changing behaviors. The system draws on biological principles, using the hormone signaling processes found in Arabidopsis sepals as a foundation for modeling the structural and responsive traits of the cell interfaces (Zhu et al., 2020). Research into the growth patterns of Arabidopsis thaliana sepals, influenced by hormones like Auxin and Cytokinin, provided key insights into CellNet’s interface designs, particularly in shaping user interaction schemes.

The study encompasses two main areas: (a) a cell interface that responds to hand gestures and (b) a spatial interface that reacts to body gestures and surrounding environments (Figure 1). Sensors and Aruco Markers track hand and body movements in real time, converting them into input variables for computational models (see “Sensing Variables” for Models #1 and #2 in Figure 1).


Figure 1 Workflow overview that illustrates how the physical movements of a user and external space could be sensed and translated into variables that influence the script of a cell interface rendered in MR

Using Grasshopper® scripts inspired by biological mechanisms, these input variables influence the shape and growth behaviors of the interfaces. Integration with the Fologram® plugin and the Microsoft® HoloLens2 HMD renders these adaptive interfaces in Mixed Reality, where they respond dynamically to users’ gestures and environmental contexts.

To facilitate this process, CellNet uses a Leap Motion® controller and Microsoft Kinect V2 to track gestures in detail. The system captures hand and body movements through 3D skeletal frames and computes variables like XYZ coordinates, velocity, proximity, and distance from the ground. These metrics are processed in Grasshopper and translated into input parameters, enabling the interface to morph based on user activity. Whether visualized in Rhino3D® or within a Mixed Reality environment, the interfaces demonstrate plant cell-inspired transformations triggered by user gestures.

By establishing a taxonomy of morphological responses (Figure 2), the study lays the groundwork for interface modeling and scripting. Influenced by biological precedents like the morphogenesis of Arabidopsis sepals (Roeder, 2021) and design principles from artificial cell morphogenesis research (Klemmt et al., 2015), the system translates user gestures into structural changes. Beyond simple interactions, CellNet extends to spatial interfaces that adjust their volume and behavior according to user movements and physical surroundings. As an interactive platform, it functions as a non-verbal simulation tool for exploring biological systems.


Figure 2 Basic guidelines for programming the morphological dynamics of a cell interface. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023
2. Related Studies
2. 1. Gesture Recognition

CellNet’s interactive capabilities build on existing research in gesture recognition, which categorizes and standardizes human gestures for computational use. Studies utilizing machine learning frameworks have refined the recognition of hand movements by filtering and training models for specific gestures (Alteaimi et al., 2022; Nogales et al., 2020; Naguri et al., 2017). Researchers have also classified taxonomies of gestures to facilitate more consistent and intuitive gesture recognition systems (Carfi et al., 2023; Haria et al., 2017). Using vision-based technology, gestures can be detected and used as expressive tools for communication (Zabulis et al., 2009).

2. 2. Gestural User Interface

Based on established gesture archetypes, semiotic or natural gestures serve as communicative signals for interacting with a wheeled mobile robot (Yun et al., 2024; Sato et al., 2007). Analogously, Zick et al. propose a hand gesture-driven framework for controlling multiple robots (Zick et al., 2024). Beyond such human-machine interaction via normed gesture recognition, gesture interaction has been combined with pseudo-haptic feedback, providing users with immersive, lifelike responses (Gaucher et al., 2013). These interfaces extend to diverse applications, such as simplifying vehicle control (Riener, 2012), wearable devices that detect gestures (Mistry et al., 2009), and GUI-enabled reactive platforms like the inFORCE project (Nakagaki et al., 2019). Additionally, gesture-recognizing GUIs have been integrated into robotic fabrication workflows, offering holographic guidance in a timber-based crafting process (Kyaw et al., 2024).

2. 3. Gestural Recognition in MR

Studies employing hand gesture recognition in MR show the potential for convergence with adjacent hardware technologies. Notably, sensor-embedded wearable apparatuses and devices are utilized as sensory receptors that track and replicate the physical gestures of a hand in MR. For instance, Li et al. devise the sensor-embedded NailRing to precisely capture micro-gestures while interacting with an interface in MR (Li et al., 2022). Lee et al. propose a tactile glove that allows a user to perceive tangible interactions by synchronizing a virtual object with a physical one (Lee et al., 2010). Similarly, Kim et al. introduce a biosensing glove that senses the pressure of a pinching gesture and applies its sensory data to MR-driven object handling (Kim et al., 2023). Regarding smart devices, Kim et al. utilize a smartwatch to track hand movement, including limb orientation, while interacting with a virtual object in MR (Kim et al., 2018). Also, Lei et al. utilize Myo Armbands to meticulously sense a forearm’s electromyography (EMG) data while gesturing with fingers in MR (Lei et al., 2023).

2. 4. Bio-Inspired Computational Design

The development of cell interfaces draws from biological principles. Computational simulations and visualizations are pivotal tools in creating biomechanically-driven models that embody morphogenetic and structural traits of biology. Researchers are inspired by microscopic natural phenomena like cell growth and division, exploring how these can be replicated through computational methods. For instance, Klemmt et al. designed generative venation patterns by studying cellular behaviors and structural outcomes (Klemmt et al., 2015). Andy Lomas developed algorithmic models that mimic artificial cellular forms and their morphological growth (Lomas, 2014). Similarly, George W. Hart explored generative cell forms with diverse cell division algorithms (Hart, 2009). From a methodological standpoint, Ahlquist et al. proposed biomimetic approaches for computational modeling, translating biological growth and evolutionary processes into digital geometries and algorithms (Ahlquist et al., 2008). Collectively, these efforts bridge biological mechanisms and computational designs, enabling the creation of cell interfaces that embody dynamic, morphologically responsive features.

3. Design and Implementation

To simulate bio-inspired cell interfaces, it is crucial to establish a framework linking real-time user data with modeling scripts. Initially, a cell interface interprets user gestures—captured via devices like Leap Motion controllers or Kinect V2—and processes this input through Grasshopper in two ways: (a) skeletal frames and points representing user movements, and (b) numerical parameters guiding the interface’s morphological responses. The Firefly plugin, combined with the Leap Motion and Kinect SDKs, visualizes skeletal data, while user parameters, like fingertip coordinates or hand proximity, are used to program the interface’s reactions.

For bio-inspired responses, plant biology and computational design principles guide the morphological behavior of the interface. The Kangaroo Physics plugin (ver. 2.42) in Grasshopper simulates tectonic and volumetric movements. Plugins like Weaverbird enhance these simulations, creating detailed 3D structures like spikes that elongate in response to gestures, such as hand-lifting or pinching (Figure 3).


Figure 3 Diagram outlining the process of how sequential hand gestures interactively manage the spike structure of the cell interface. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

In subsequent phases, the interface is expanded into a Mixed Reality (MR) environment using the Fologram plugin and the HoloLens2 HMD. This setup integrates user gestures and environmental data, like Aruco marker arrays, to overlay a dynamic interface onto physical spaces.

3. 1. Model #1: Hand-Gesture-Responsive Cell Interfaces
3. 1. 1. Spike elongation responsive to hand lifting

The parametric setting that governs the morphological characteristics of a cell interface is a key factor in enabling immersive user interaction. From a usability perspective, the interface must maintain a certain level of sensitivity and responsiveness so that it can adapt in real time to input from a user’s sensory actions. Accordingly, to determine an optimal degree of responsiveness, Table 1 presents a simulation focusing on how the spikes of a cell interface extend in response to the upward movement of a user’s hand.

Table 1
Simulation on cell spike elongation responding to hand-lifting motion. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

Input Parameters Stage 1 Stage 2 Stage 3 Stage 4 Stage 5 Stage 6 Stage 7 Stage 8
<Parameter 1>
Distance value between the index finger's point and the center point of a 2D surface on the ground (height of a hand)
15.572355 15.6838 15.624299 15.766315 15.52455 15.763275 15.725905 15.568403
<Parameter 2>
‘Weighting’ option of ‘Load’ component
0.055 0.1103 0.1986 0.2386 0.3124 0.339 0.4016 0.4316
<Parameter 3>
‘Strength’ option of ‘Anchor’ component
90.76 90.76 90.76 90.76 90.76 90.76 90.76 90.76
<Parameter 4>
‘Distance’ option of ‘Weaverbird’s Stellate/Cumulation’ component
0.798 2.834 6.149 10.049 18.583 29.633 46.798 59.008
<Parameter 5>
‘Length’ option of ‘Trimesh’ component
3 3 3 3 3 3 3 3
* All numerical data in the table are based on the default numerical unit of Grasshopper

Within this simulation, five Grasshopper components serve as the main input parameters: the distance between the index finger’s skeletal point and the center of a 2D ground plane (i.e., the user’s hand height); the “Weighting” option within the “Load” component; the “Strength” option in the “Anchor” component; the “Distance” option of the “Weaverbird’s Stellate/Cumulation” component; and the “Length” option of a “Trimesh” component. These are all linked to number sliders for adjustable numerical inputs. The value representing the hand’s height is captured via skeletal data (index finger to 2D surface center on the XY plane), and its numerical output from the “Distance” component is relayed to the “Sync Text Tag” component of the Fologram plugin for real-time display.


Figure 4 Cell spike elongation in response to a hand-lifting gesture. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

To monitor changes in the spikes’ elongation, several conditions are preset. First, parameter 3 remains at 90.76 to anchor the mesh structure to the surface. Second, the hand’s vertical distance (parameter 1) stays between 15.00 and 16.00, allowing for clear observation of morphological alterations driven by parameters 2 and 4. Since parameters 2 and 4 critically influence spike elongation, they are incrementally increased at each stage, while parameter 3 remains constant. Under these circumstances, the mesh exhibits progressive volumetric expansion and remains anchored to the 2D ground plane. In addition, to sustain the properly inflated turgidity of the cell interface and prevent unexpected morphological deformation caused by mesh explosion, parameter 1’s distance value is kept within a slight fluctuation between 15.52455 and 15.766315.
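As a rough illustration of this setup, the sketch below computes Parameter 1 (the hand height, i.e., the distance from the index fingertip to the center of the ground surface) and linearly remaps it into the "Weighting" and Stellate "Distance" ranges reported in Table 1. This is a standalone Python sketch with assumed coordinates and an assumed linear mapping; the actual Grasshopper definition wires these values through number sliders and the Kangaroo/Weaverbird components rather than through code.

```python
import math

def hand_height(fingertip, surface_center):
    """Parameter 1 in Table 1: distance between the index fingertip and the
    center point of the 2D ground surface."""
    return math.dist(fingertip, surface_center)

def remap(value, src_min, src_max, dst_min, dst_max):
    """Linearly remap a value from a source range to a target range (clamped)."""
    t = (value - src_min) / (src_max - src_min)
    t = max(0.0, min(1.0, t))
    return dst_min + t * (dst_max - dst_min)

# Hypothetical fingertip position streamed from the Leap Motion skeleton
fingertip = (3.2, 4.1, 15.1)
surface_center = (0.0, 0.0, 0.0)
h = hand_height(fingertip, surface_center)

# Assumed linear mapping into the parameter ranges reported in Table 1
weighting = remap(h, 15.0, 16.0, 0.055, 0.4316)   # Parameter 2: 'Weighting' of 'Load'
stellate  = remap(h, 15.0, 16.0, 0.798, 59.008)   # Parameter 4: 'Distance' of 'Stellate/Cumulation'

print(f"hand height: {h:.3f}  weighting: {weighting:.4f}  spike distance: {stellate:.3f}")
```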

3. 1. 2. Classification of sequential hand gestural patterns

Categorizing non-linear hand gestures provides a guiding framework for developing gesture recognition. During this stage, diverse hand movements detected by the Leap Motion controller are organized as typological patterns (Figure 5). Integrated with plant cell-inspired morphological attributes (Figure 2), the interface responds to predetermined gestures—like clenching and opening the fist (and vice versa), hovering, and moving the hand up and down—by enacting specific morphological reactions.


Figure 5 Typological categorization of hand gestures for interacting with a cell interface using the Leap Motion controller. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023
3. 1. 3. 2D-Based Cell Growth Regulatory System

Establishing a gesture-oriented simulation, the morphological dynamics of a cell interface are defined by a hormone-influenced growth regulation framework found in plant biology. Different hormone signals and their spatial distribution cause varying growth results (Zhu et al., 2020). Additionally, the presence and concentration of plant hormones like Auxin and Abscisic Acid (ABA) modulate the growth and proliferation of cells (Yue et al., 2021). Drawing on these principles, 2D cell interface prototypes are generated and assessed under various conditions. To model the simulation, the "Kangaroo Physics" plugin, together with the "SphereCollide," "Voronoi," and "CurvePointCollide" components, facilitates both cell proliferation and growth inhibition (see lower right of Figure 7).

Figure 6 displays how cell interfaces proliferate under different scenarios. Image 1 shows an increase in cell density (from 30 to 250), achieved by raising the numerical value connected to the "Count" option of the "Populate Geometry" component to simulate cell division. In Image 2, a red cell wall is introduced to hinder further proliferation by linking a curve to the "CurvePointCollide" component, reducing expansion. In Image 3, multiple cell growth emission areas produce alternative proliferation outcomes.


Figure 6 Sequential demonstration of cell proliferation models: (1) default proliferation, (2) a cell wall introduced to limit growth, (3) multiple cell wall structures enabling selective growth areas. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

Figure 7 Grasshopper scripting for cell proliferation. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023
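The sketch below is a loose, standalone approximation of this proliferation-and-inhibition behavior: centroids are spawned over time, pushed apart when they overlap (standing in for "SphereCollide"), and held inside a circular boundary that plays the role of the red cell wall fed to "CurvePointCollide." All radii, counts, and the circular wall are illustrative assumptions, not values taken from the Grasshopper script.

```python
import math
import random

RADIUS = 0.5              # collision radius between centroids ('SphereCollide' stand-in)
WALL_CENTER = (5.0, 5.0)  # circular stand-in for the red cell-wall curve
WALL_RADIUS = 3.0         # ('CurvePointCollide' stand-in)

def relax(points, iterations=30):
    """Push overlapping centroids apart and keep them inside the wall."""
    pts = [list(p) for p in points]
    for _ in range(iterations):
        # pairwise collision: separate centroids closer than 2 * RADIUS
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx, dy = pts[j][0] - pts[i][0], pts[j][1] - pts[i][1]
                d = math.hypot(dx, dy) or 1e-9
                overlap = 2 * RADIUS - d
                if overlap > 0:
                    ux, uy = dx / d, dy / d
                    pts[i][0] -= ux * overlap / 2; pts[i][1] -= uy * overlap / 2
                    pts[j][0] += ux * overlap / 2; pts[j][1] += uy * overlap / 2
        # inhibition: pull any centroid that escaped the wall back onto it
        for p in pts:
            dx, dy = p[0] - WALL_CENTER[0], p[1] - WALL_CENTER[1]
            d = math.hypot(dx, dy)
            if d > WALL_RADIUS - RADIUS:
                s = (WALL_RADIUS - RADIUS) / d
                p[0] = WALL_CENTER[0] + dx * s
                p[1] = WALL_CENTER[1] + dy * s
    return [tuple(p) for p in pts]

# Proliferation: each step spawns a new centroid next to a random parent,
# loosely mimicking the 'Count' value being raised from 30 toward 250
cells = [(5.0, 5.0)]
for _ in range(30):
    px, py = random.choice(cells)
    cells.append((px + random.uniform(-0.2, 0.2), py + random.uniform(-0.2, 0.2)))
    cells = relax(cells)

print(len(cells), "centroids packed inside the wall")
```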
3. 1. 4. 3D-Based Cell Growth Regulatory System for Hand Gestural Interaction

The mechanism discussed in 3.1.3, focusing on cell proliferation and an inhibiting cell wall, transitions to a 3D framework (Figure 8). In response to a user’s vertical hand movement, the region that fostered cell proliferation in 2D is transformed into a green, inflated geometry that expands toward the rising hand. In contrast, the region acting as an inhibitor becomes a blue, non-inflated geometry that demonstrates minimal response to the same vertical movement. By adjusting how the surface inflation mechanism reacts, this coded inhibitory region manages the interface’s morphological development. The outcomes become progressively distinct across the five stages as the hand-lifting distance increases (Table 2).


Figure 8 Conceptual diagram showing how 2D proliferation models transition into 3D cell proliferation models with regulatory systems. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023
Table 2
Simulation data of a mesh structure governed by a cell growth regulatory system in response to vertical hand movement. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

Input Parameters Stage 1 Stage 2 Stage 3 Stage 4 Stage 5
Top Area Bottom Area Top Area Bottom Area Top Area Bottom Area Top Area Bottom Area Top Area Bottom Area
<Parameter 1>
Distance value between the index finger's point and the center point of a 2D surface on the ground (height of a hand)
11.754076 16.975323 23.490336 33.70654 43.910953
<Parameter 2>
‘Weighting’ option of ‘Load’ component
0.3146 0.1524 0.3146 0.1524 0.3146 0.1524 0.3146 0.1524 0.3146 0.1524
<Parameter 3>
‘Strength’ option of ‘Anchor’ component
98.4 98.4 98.4 98.4 98.4
<Parameter 4>
‘Distance’ option of ‘Weaverbird’s Stellate/Cumulation’ component
-25.2 10.955 -25.2 10.955 -25.2 10.955 -25.2 10.955 -25.2 10.955
<Parameter 5>
‘Length’ option of ‘Trimesh’ component
3 3 3 3 3
Result Value Stage 1 Stage 2 Stage 3 Stage 4 Stage 5
Height value of an inflated mesh structure (Distance between inflated mesh structure’s mesh point and its projected point on the XY plane) 11.830459 11.830459 15.48973 10.071187 18.954637 11.482741 22.594235 12.933366 26.266357 14.328323
* All numerical data in the table are based on the default numerical unit of Grasshopper

Similar to 3.1.1, the distance between the hand and the cell interface is calculated using the index finger’s skeletal data and the center of a 2D circle comprising two mesh types. Unlike the regulated hand-height range in 3.1.1, the height consistently increases with each stage, so parameter 1 climbs while parameters 2 through 5 remain unchanged. Notably, parameter 4 is configured to implement opposite inflation directions for the top and bottom sections, assigning negative values to the top region and positive values to the bottom. Meanwhile, parameters 2 and 3 remain static to maintain mesh stability and preserve its attachment to the platform.

To gauge the distance between the user’s hand and the interface, mesh points on the inflated structure are initially labeled with Grasshopper’s “Point List” (Figure 9). Subsequently, these points (from both top and bottom regions) are projected onto the XY plane. The distance from each original mesh point to its ground projection is then recorded as the inflated mesh’s height (Figure 10).
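In plain terms, for a vertex at (x, y, z) this height is simply |z|, since projecting onto the XY plane only zeroes out the z-coordinate. The minimal sketch below uses hypothetical vertex positions standing in for the tagged points of Figure 9, and assumes the tallest vertex of each region is the one reported; the paper does not state which vertex the table records.

```python
def projected_height(mesh_point):
    """Distance from a mesh vertex to its projection on the XY plane.
    For a point (x, y, z) the projection is (x, y, 0), so the height is |z|."""
    _, _, z = mesh_point
    return abs(z)

# Hypothetical tagged vertices from the inflated (top) and non-inflated (bottom) regions
top_points    = [(1.0, 2.0, 15.5), (1.5, 2.2, 15.2), (0.8, 1.7, 14.9)]
bottom_points = [(4.0, 4.5, 10.1), (4.2, 4.8, 9.8), (3.7, 4.1, 10.3)]

# Assumption: report the tallest vertex of each region as that region's mesh height
top_height    = max(projected_height(p) for p in top_points)
bottom_height = max(projected_height(p) for p in bottom_points)
print(f"top: {top_height}, bottom: {bottom_height}")
```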


Figure 9 Mesh point tagging on inflated structures for identification. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

Figure 10 Determining the inflated mesh's height by measuring the space between a mesh point and its ground projection (in green lines to denote vertical distance). Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

To examine the structural responsiveness of these meshes, the height results listed in Table 2 are charted as line graphs in Chart 1. The comparison of gradients between the top and bottom areas shows that the graph for the top area has a steeper slope, signifying that the top area’s mesh inflation is more sensitive to the incremental vertical movement of the hand than the bottom area’s. Moreover, as described above, the differential settings for the top and bottom areas (with negative and positive values in parameter 4) are intentionally configured to showcase how areas promoting cell growth and those inhibiting it can respond differently to the same hand motion.


Chart 1 Graph comparing the structural responsiveness of top and bottom meshes to increasing hand height. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023
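The gradient comparison in Chart 1 can be reproduced directly from the Stage 1-5 values in Table 2. The snippet below fits a least-squares line of mesh height against hand height for each region; it is a plain Python/NumPy sketch of the comparison, not part of the Grasshopper definition.

```python
import numpy as np

# Parameter 1 (hand height) and the resulting mesh heights for Stages 1-5 (Table 2)
hand_height   = np.array([11.754076, 16.975323, 23.490336, 33.70654, 43.910953])
top_height    = np.array([11.830459, 15.48973, 18.954637, 22.594235, 26.266357])
bottom_height = np.array([11.830459, 10.071187, 11.482741, 12.933366, 14.328323])

# Least-squares slope of mesh height versus hand height for each region
top_slope, _    = np.polyfit(hand_height, top_height, 1)
bottom_slope, _ = np.polyfit(hand_height, bottom_height, 1)

print(f"top slope: {top_slope:.3f}, bottom slope: {bottom_slope:.3f}")
# The larger top slope mirrors Chart 1: the growth-promoting (top) region
# inflates more per unit of hand lift than the inhibited (bottom) region.
```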
3. 2. Model #2: Spatial Cell Interfaces Responsive to Body Gestures and Environments
3. 2. 1. Cell Growth Regulatory System-Driven Interaction

Building on the hormone-based growth regulatory system introduced in 3.1.3, this phase applies the same principle to guide the morphological behavior and proliferation of cell interfaces. By leveraging the interplay between cell growth accelerators and inhibitors, this simulation focuses on two core concepts: (a) Cell interfaces that proliferate through a spatial cell-growth-accelerating region; (b) Users who function as inhibitors, regulating and limiting cell growth within a given space. In this simulation, a user essentially acts as a cell growth inhibitor while the cell interface freely disperses throughout the environment (Figure 11). The user’s occupied area—projected onto a 2D plane—blocks the expansion of cell centroids and constrains their directional vectors. This setup mirrors the cell wall mechanism from 3.1.3, where the user’s spatial footprint replaces the cell wall, governing cell proliferation and affecting the fluid-like motion of the interface in real time.


Figure 11 Simulations illustrating how a user can impede cell proliferation or influence collective cell direction by forming a 2D area (shown as a red "blob" around the user's skeleton) on the ground where the cell interface grows. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

To create this system, the same algorithmic structure used in 3.1.3 is reused, augmented by the Metaball component for visually fluid cell interface growth. In the Grasshopper script, the cell interface’s centroids feed into the “points” input of the “Metaball” component, forming line-based fluid structures that interact with the user’s footprint. Notably, the cell interface scatters when the user walks through it, showing a fluid reaction triggered by the Metaball function.

Regarding user data, Figure 11 shows how skeletal points captured by Kinect V2 are projected onto the ground plane (XY Plane). These projected circles merge into a single blob-like shape using the “Region Union” component. Finally, linking this unified 2D geometry to the “CurvePointCollide” component designates it as a collision object for the cells. As a result, this 2D surface derived from the user’s skeleton serves as a cell growth inhibitor, curbing the interface’s expansion or steering the directional paths of proliferating cells (Figure 12).
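A compact way to picture this footprint logic, assuming the shapely library as a stand-in for the "Region Union" and "CurvePointCollide" components: project each skeleton joint onto the XY plane, buffer it into a circle, merge the circles into one blob, and treat any centroid landing inside that blob as inhibited. The joint coordinates and buffer radius below are illustrative, not values taken from the Kinect stream.

```python
from shapely.geometry import Point
from shapely.ops import unary_union

# Hypothetical Kinect V2 skeleton joints as (x, y, z) positions in model space
skeleton_joints = [(2.0, 3.0, 15.0), (2.1, 3.2, 12.0), (1.8, 2.9, 9.0), (2.3, 3.1, 1.0)]

# Project each joint onto the XY plane, buffer it into a circle, and merge the
# circles into the single blob shown in Figure 11 ('Region Union' stand-in)
footprint = unary_union([Point(x, y).buffer(0.6) for x, y, _ in skeleton_joints])

def is_inhibited(centroid_xy):
    """A proliferating cell centroid is blocked if it lands inside the user's footprint."""
    return footprint.contains(Point(centroid_xy))

print(is_inhibited((2.0, 3.0)))   # inside the blob  -> True
print(is_inhibited((6.0, 6.0)))   # open floor      -> False
```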


Figure 12 Grasshopper script generating a user's occupied area based on skeletal data. The red surface enveloping the skeleton in Figure 11 represents how a user's space influences the collective movement of cells. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023
3. 2. 2. Cell Growth Regulatory System-Driven Interaction for Multiple Body Gestures and Physical Surroundings

Using the same developmental logic, the simulation now scales up to incorporate multiple users and real-world architectural environments. Within the script, several new input variables are introduced: (a) Detection of multiple individuals; (b) Recognition of actual spatial conditions. Kinect V2 tracks each participant’s body motions, converting them into skeletons on the Rhino3D platform through the “Kinect V2 Skeleton Tracker” in Firefly. As in 3.2.1, these users act as cell growth inhibitors, limiting how far the cell interface can expand. Specifically, the 2D projection of users’ occupied areas interrupts and segments the Metaball blob, influencing the overall direction of the interface’s cell centroids (Figure 13).


Figure 13 Multi-user simulations of a cell interface regulated by a hormone-based growth system. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

In this multi-object simulation, architectural settings also serve as an inhibitory factor, comparable to the user-generated zones, thereby affecting the interface’s proliferation and shape. Additionally, the inhibiting area responding to the environment may adapt in a generative manner, depending on environmental conditions. To map the actual surroundings for Mixed Reality simulations, Aruco Markers are placed on the ground corresponding to the layout of real-world architecture (Figure 14).


Figure 14 Aruco Marker pads used for mapping physical environments. To enhance stability, 3D-printed pads (15 mm thick) are attached to the markers before laying them out on the ground. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

A small-scale simulation (Figure 15) demonstrates how environmental volumes are detected through Aruco Markers and subsequently influence the cell interface’s proliferation. By placing Aruco Markers on the ground near physical objects, devices running the Fologram app (smartphone, iPad, or HoloLens2 HMD) scan the markers’ XYZ positions and relay them to Grasshopper in real time. These points become control vertices for generating curves, which then form surfaces (the blue areas in Figure 15). Together with user-occupied zones, these environmental surfaces act as inhibitors, limiting the spread of the cell interface’s centroids as they proliferate.


Figure 15 Speculative simulation of how Aruco Markers map the environment, restricting the cell interface's growth. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

To enhance the visual portrayal of proliferation, the interface employs a 3D Metaball system based on line segments (Figure 16). In Grasshopper, the “Metaball(t) Custom” component is used alongside an array of reference XY planes spaced perpendicularly on the platform, creating a layered Metaball structure. Assigning varying “charging” values to each cell centroid with the “Gene Pool” component controls the density of these stacked Metaball lines.


Figure 16 Speculative simulation showing how a 3D Metaball fluid system animates the proliferation of a cell interface constrained by user interactions and environmental factors. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023
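A rough, standalone approximation of this layered structure: an inverse-square metaball field with per-centroid "charging" values (the Gene Pool inputs) is evaluated on a grid, and one isoline is drawn per stacked reference plane. The centroid positions, charges, layer heights, and thresholds below are all illustrative assumptions; the actual interface builds these curves with the "Metaball(t) Custom" component in Grasshopper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative cell centroids and per-centroid "charging" values (Gene Pool inputs)
centroids = np.array([[2.0, 3.0], [6.0, 5.0], [4.5, 1.5]])
charges   = np.array([1.0, 1.6, 0.8])

xs, ys = np.meshgrid(np.linspace(0, 8, 300), np.linspace(0, 8, 300))
field = np.zeros_like(xs)
for (cx, cy), q in zip(centroids, charges):
    field += q / ((xs - cx) ** 2 + (ys - cy) ** 2 + 1e-9)   # inverse-square metaball field

# One isoline per stacked reference plane; varying the threshold per layer is
# purely illustrative and only serves to make the stack visible
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for z, threshold in zip(np.linspace(0.0, 2.5, 6), np.linspace(1.4, 0.5, 6)):
    ax.contour(xs, ys, field, levels=[threshold], zdir="z", offset=z)
ax.set_zlim(0.0, 2.5)
plt.show()
```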
3. 2. 3. Augmenting a Spatial Cell Interface in Mixed Reality

As explained in 3.2.2, a set of Aruco Marker pads (approximately 16 to 24, with markers affixed to 3D-printed pads) is prepared and placed near physical objects and architectural features such as beams, columns, chairs, and tables (Images 1 and 2 of Figure 17). Once the markers are arranged, a user with an iPad running the Fologram app, or wearing a HoloLens2 HMD with the same app, begins scanning each marker at close range. As the markers are scanned, their positions are converted into XYZ coordinates, which appear as points in Rhino3D in real time (see the red points near the generated red surface in Image 4). Meanwhile, the unique marker labels are compiled in Grasshopper’s recording panel (the yellow panel on the right of Image 5). After integer values are entered in the “Degree” and “Periodic” options of the “Nurbs Curve” component, the newly created points yield a NURBS curve, which is then unified into a single closed curve. This curve is subsequently turned into a surface (the red surface in Image 4). Consequently, anyone wearing the HoloLens2 HMD sees this surface blended into the real environment through mixed reality (Image 3 of Figure 17).
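The sketch below shows one way such a closed boundary could be reconstructed outside Grasshopper, assuming SciPy's periodic spline fitting as a rough analogue of the "Nurbs Curve" component's Degree and Periodic options. The marker coordinates are invented for illustration; in the actual workflow the scanned points stream from Fologram into Grasshopper.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical XYZ coordinates scanned from Aruco Marker pads placed around a column
markers = np.array([
    [0.0, 0.0, 0.0], [2.0, -0.5, 0.0], [4.0, 0.2, 0.0],
    [4.5, 2.0, 0.0], [3.0, 3.5, 0.0], [0.5, 3.0, 0.0],
])
closed = np.vstack([markers, markers[:1]])   # repeat the first point to close the loop

# Fit a closed (periodic) cubic spline through the points, loosely mirroring the
# 'Degree' and 'Periodic' options of the 'Nurbs Curve' component
tck, _ = splprep(closed.T, k=3, s=0, per=True)
u = np.linspace(0.0, 1.0, 200)
boundary = np.array(splev(u, tck)).T   # densely sampled closed boundary curve

# In the actual workflow this closed curve is capped into the red surface of
# Figure 17 and fed to 'CurvePointCollide' as a growth inhibitor
print(boundary.shape)   # (200, 3)
```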


Figure 17 Process illustrating how arrayed Aruco Markers detect the physical environment and transform it into a red surface in Mixed Reality. This generated surface, which represents the physical environment, influences both the proliferation of the cell interface and the multidirectional movement of its centroids. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

In Image 3 of Figure 17, the nurbs curve from the Aruco Markers is linked to the “Curve” option of the “CurvePointCollide” component. As shown in Figures 15 and 16, the resulting red surface signifies the physical environment, functioning as a growth inhibitor that constrains the cell interface’s local proliferation—akin to the user’s blob area that shapes morphological changes in the interface. In conjunction with the HoloLens2 HMD’s mixed reality functionality, the interface’s proliferation and reactions to the recognized red surface (derived from real environmental elements) are rendered in real time.

Relying on this method of detecting the physical environment and converting it into a 2D graphic surface in MR, simulations of a cell growth regulatory system-driven interface are carried out (Figure 18). Viewing the simulation from a third-person perspective in the Rhino3D modeling space (Image 1 of Figure 18), the user wearing a HoloLens2 HMD interacts with the holographic cell interface layered over the physical setting.


Figure 18 Simulations of a growth-regulated cell interface that responds to user body movements and physical surroundings identified by Aruco Markers. (1) Real-time simulation in Rhino3D's modeling space while placing a full-scale interface in the actual environment, (2) A metaball fluid structure reacting to the user's movement, (3) Fluid interaction of the metaball-based interface as it encounters the environment's area (shown as a faint red surface on the left) defined by Aruco Markers, (4) Line-based metaball structure rendered in Mixed Reality, displaying fluid behavior. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

Within the mixed reality environment, a metaball fluid structure integrated into the cell interface’s proliferation model engages with user gestures—like walking or clapping—by tracking skeletal frames through Kinect V2. Moreover, the metaball structure’s growth and motion are deflected by the red surface formed by the spatial arrangement of Aruco Markers. Additionally, to react to a user’s movements on the floor, the user’s 2D footprint (calculated from Kinect V2’s skeleton data) is recorded in Grasshopper as a prerequisite. As a result, this 2D footprint functions as a growth inhibitor, affecting both the interface’s proliferation and the metaball structure’s fluid reactions in an interactive manner (Image 2 of Figure 18).

Subsequently, the simulation of the growth-regulated cell interface expands into multi-user scenarios under the same developmental and environmental criteria (Figure 19). Anchored by the generated surface (green area on the left side of Image 1 in Figure 19) created with Aruco Markers around the chosen space, a metaball-based cell interface proliferates. As it does, the fluid-like morphology of the metaball structure changes upon contacting user-defined or environment-defined boundaries.


Figure 19 Multi-user simulations of a growth-regulated cell interface. (1) Users' skeletal frames as they engage with the interface through body gestures and hand movements, (2) A close-up of the line-based fluid structure as seen through a HoloLens2 HMD, (3) A user experiencing the holographic interface overlaid in Mixed Reality, (4) A holographic interface responding to hand gestures, where proliferation rate, volumetric behavior of the metaball structure, and expansion levels are tied to the proximity values detected from the user's moving hand skeleton. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

To preserve the volume of the metaball structure even as the cell interface expands, the maximum input value for the “Radius” setting of the “Sphere Collide” component—which determines proliferation speed and fluidity—is fixed at 108.00 when receiving user input. Values above 108.00 are purposely scaled down by multiplying with a decimal factor. This way, user inputs remain under the threshold. Under these constraints, user proximity values, measured from their hands’ skeleton points, feed into the “Radius” parameter of the “Sphere Collide” component, causing the interface to expand in step with a user’s hand movements (Image 4 of Figure 19).
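The capping logic can be summarized in a few lines. In this sketch the scale-down factor is an assumed placeholder, since the paper reports only that over-threshold inputs are multiplied by a decimal factor to stay below 108.00:

```python
RADIUS_CAP = 108.00   # maximum 'Radius' value passed to the 'Sphere Collide' component
SCALE_DOWN = 0.05     # assumed decimal factor; the exact value is not reported

def bounded_radius(proximity_value):
    """Keep the user-driven radius under the cap so the metaball volume is preserved:
    readings above the cap are scaled down instead of being passed through."""
    if proximity_value > RADIUS_CAP:
        return min(proximity_value * SCALE_DOWN, RADIUS_CAP)
    return proximity_value

# Hypothetical hand-proximity readings mapped onto the sphere-collide radius
for reading in (42.5, 96.0, 240.0, 1310.0):
    print(f"{reading:8.1f} -> {bounded_radius(reading):6.2f}")
```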

3. 2. 4. Applying the MR-Enabled Spatial Cell Interface to Architectural Surroundings

To explore the interface’s site-specific adaptability, additional simulations of the growth-regulated cell interface were conducted in diverse locations with varied architectural and environmental features (Table 3). Since reliable Wi-Fi was necessary to synchronize the modeling data with the iPad or a HoloLens2 HMD, most simulations were carried out in designated indoor areas. In those spaces, permanent and temporary architectural elements—like partition walls or sponge stands for exhibitions—provided environmental factors for the interface to interact with in Mixed Reality.

Table 3
Simulations of a growth-regulated cell interface in multiple architectural contexts. Reprinted from the first author's master's thesis, "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality", submitted to Cornell University, 2023

Conditions Simulation 1 Simulation 2 Simulation 3 Simulation 4 Simulation 5
MR Simulation in the actual environments
<Outcome 1>
Recognized area of targeted physical environment via ‘Aruco Markers’
<Outcome 2>
Voronoi-based cell interface towards recognized environmental area
<Outcome 3>
2D Metaball structure-based cell interface towards recognized environmental area
<Outcome 4>
3D Metaball structure-based cell interface towards recognized environmental area
Venue for the simulation Dome area of Milstein Hall, Cornell AAP Dome area of Milstein Hall, Cornell AAP Dance floor area of Milstein Hall, Cornell AAP 2nd floor of Sibley Hall, Cornell AAP 3rd floor of Sibley Hall, Cornell AAP
Environmental elements in the simulation Movable partition walls, Sponge pedestal stands Diagonal pillar, long bench H-Beam Movable partition walls, Pillar Movable partition walls, Movable TV stand, Chair
Total number of Aruco Marker pads used 18ea 18ea 26ea 21ea 28ea
<Parameter 1>
‘Radius’ option of ‘Sphere Collide’ component
108.00 55.88 90.32 82.82 82.82
<Parameter 2>
‘Strength’ option of ‘Sphere Collide’ component
100.0 100.0 100.0 100.0 100.0
<Parameter 3>
‘Degree’ option of ‘Nurbs Curve’ component
1.2 1.2 7.3 7.3 7.3
<Parameter 4>
‘Count’ option of ‘Populate Geometry’ component
38 38 38 38 38
<Parameter 5>
‘Strength’ option of ‘CurvePointCollide’ component
97.534 97.534 64.874 64.874 64.874
* All numerical data in the table are based on the default numerical unit of Grasshopper

As in the earlier simulations, an array of Aruco Marker pads is placed around the targeted zone to detect that portion of the physical environment and convert it into a regulating surface. Through this process, the Aruco Markers’ XYZ coordinates are registered in Rhino3D and Grasshopper, where they are used to generate surfaces.

Due to each site’s layout of architectural or environmental features, the recognized area in each simulation produces varied results (Outcome 1 in Table 3). Each simulation employed different edge parameters when forming the surface from the scanned Aruco Markers’ coordinates. In Parameter 3 of Table 3, the “Degree” option for the “Nurbs Curve” component was set to 1.2 in Simulations 1 and 2, yielding more angular, non-filleted edges. Conversely, Simulations 3, 4, and 5 used the same “Degree” value of 7.3, resulting in smoother, rounded edges.

Based on these differently generated surfaces—capturing real-space elements—the interface’s proliferation model was tested. When observing how the interface responded morphologically to each environment’s recognized area, the surfaces in Simulations 1, 4, and 5 had a more pronounced effect on the interface’s shape than those in the other cases. Structurally, the cell centroids in Simulations 1, 4, and 5 began in a compressed state, reflecting narrower corners in the recognized real-world environment. Combined with the presets in Parameters 1 and 5, which regulate the interface’s expansion rate and velocity, this congestion influenced the collective direction of the cell centroids, eventually triggering the interface to burst into more open areas. This effect is clearly visible in Outcome 4 of Simulations 4 and 5.

4. Conclusion
4. 1. Discussion

Drawing from microscopic biological traits, CellNet investigated how bio-inspired interfaces can interact with users and the physical world. By simulating plant cell biology in a computational setting, the system illustrates the possibility of reinterpreting and extending the morphogenetic qualities of cells into Mixed Reality environments.

During the development of user-responsive CellNet interfaces, determining the number of cell centroids was a key factor in managing overall density. To avoid lag in the MR environment when using a HoloLens2 HMD, the “Count” value in the “Populate Geometry” component—a parameter controlling how many centroids are generated—was set to 38 in most Kinect V2-driven simulations. This figure was slightly adapted based on the real-time rendering needs of each simulation.

Nevertheless, even when the cell centroid count was capped at 38, software or network glitches occasionally occurred, especially during two-user simulations that connected the Fologram app on an iPad to a Grasshopper script. To address these challenges, the model was optimized to reduce computing overhead by disabling or removing unnecessary visible components in Grasshopper. These adjustments partially improved real-time rendering for multi-layered cell interfaces (such as those generated by 2D Voronoi, 2D Metaball, and 3D Metaball structures). Alongside ongoing script optimizations, we also discussed broader usability concerns, identifying prospective audiences and potential expansions for CellNet.

4. 2. Future Usability

Biological processes such as cell growth, elongation, and division served as foundational principles in CellNet, enabling innovative user interactions within both modeling platforms and MR environments integrated with real-world spaces. Building on this concept, CellNet could be adapted as a specialized testing platform for biologists, providing a tool to integrate and simulate their data and biological rules. Researchers across various disciplines could introduce their own parameters and principles into CellNet, creating unique interface types informed by new biological insights.

To further improve accessibility, the next version of CellNet will prioritize enhancing the user experience by incorporating features like a controller panel-equipped interface, complete with a step-by-step tutorial or graphic guidance. These additions will help users intuitively understand the underlying principles of bio-mechanisms.

This framework allows biologists to explore large-scale, interactive models of morphological and morphogenetic behavior. Future developments of CellNet will likely include usability testing with diverse biological specimens and user groups, particularly biologists. By expanding user engagement, CellNet aims to support the simulation, analysis, and spatial exploration of emerging biological theories and phenomena. The integration of MR headsets, capable of visualizing volumetric, dynamic interfaces controlled by specific hand and body movements, offers users an intuitive way to verify parameter-driven mechanisms in real-time. Ultimately, CellNet has the potential to become a collaborative, interactive platform that evolves further through ongoing partnerships with the scientific and biological research communities.

Acknowledgments

As mentioned, this study is based on the first author's master's thesis, titled "CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality," submitted to Cornell University in 2023. Also, ChatGPT 4o (OpenAI, 2024) was partly utilized to aid in polishing the grammar of the manuscript.


References
  1. Ahlquist, S., & Fleischmann, M. (2008). Material & Space: Synthesis Strategies based on Evolutionary Developmental Biology. Proceedings of the 28th Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA), pp. 67-71. [https://doi.org/10.52842/conf.acadia.2008.066]
  2. Alteaimi, A., & Othman, M. B. (2022). Robust interactive method for hand gestures recognition using machine learning. Computers, Materials & Continua, 72(1), 577-595. [https://doi.org/10.32604/cmc.2022.023591]
  3. Carfì, A., & Mastrogiovanni, F. (2021). Gesture-based human-machine interaction: Taxonomy, problem definition, and analysis. IEEE Transactions on Cybernetics, 53(1), 497-513. [https://doi.org/10.1109/TCYB.2021.3129119]
  4. Gaucher, P., Argelaguet, F., Royan, J., & Lécuyer, A. (2013, March). A novel 3D carousel based on pseudo-haptic feedback and gestural interaction for virtual showcasing. In 2013 IEEE Symposium on 3D User Interfaces (3DUI) (pp. 55-58). IEEE. [https://doi.org/10.1109/3DUI.2013.6550197]
  5. Haria, A., Subramanian, A., Asokkumar, N., Poddar, S., & Nayak, J. S. (2017). Hand gesture recognition for human computer interaction. Procedia Computer Science, 115, 367-374. [https://doi.org/10.1016/j.procs.2017.09.092]
  6. Hart, G. (2009, July). Growth forms. In Proceedings of Bridges 2009: Mathematics, Music, Art, Architecture, Culture (pp. 207-214). Retrieved from http://archive.bridgesmathart.org/2009/bridges2009-207.html
  7. Katsikopoulou, M. (2021). ryoichi kurokawa superimposes 3D data of architecture + nature into mind-bending installations. designboom. Retrieved from https://www.designboom.com/art/ryoichi-kurokawa-3d-data-architecture-ruins-nature-mind-bending-installations-12-31-2021/ (Accessed: Apr 16, 2023).
  8. Kim, H. I., Lee, J., Yeo, H. S., Quigley, A. J., & Woo, W. (2019, April). SWAG demo: Smart watch assisted gesture interaction for mixed reality head-mounted displays. In Adjunct Proceedings of the 2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct 2018) (pp. 428-429). IEEE. [https://doi.org/10.1109/ISMAR-Adjunct.2018.00130]
  9. Kim, J., Bayro, A., Lee, J., Soltis, I., Kim, M., Jeong, H., & Yeo, W. H. (2023). Mixed reality-integrated soft wearable biosensing glove for manipulating objects. Biosensors and Bioelectronics: X, 14, 100343. [https://doi.org/10.1016/j.biosx.2023.100343]
  10. Klemmt, C., & Bollinger, K. (2015, September). Cell-Based Venation Systems. In Real Time: Proceedings of the 33rd eCAADe Conference (Vol. 2, pp. 573-580). [https://doi.org/10.52842/conf.ecaade.2015.2.573]
  11. Kyaw, A. H., Spencer, L., & Lok, L. (2024). Human-machine collaboration using gesture recognition in mixed reality and robotic fabrication. Architectural Intelligence, 3(1), 11. [https://doi.org/10.1007/s44223-024-00053-4]
  12. Lee, J. Y., Rhee, G. W., & Seo, D. W. (2010). Hand gesture-based tangible interactions for manipulating virtual objects in a mixed reality environment. The International Journal of Advanced Manufacturing Technology, 51, 1069-1082. [https://doi.org/10.1007/s00170-010-2671-x]
  13. Lei, Y., Deng, Y., Dong, L., Li, X., Li, X., & Su, Z. (2023). A novel sensor fusion approach for precise hand tracking in virtual reality-based human-computer interaction. Biomimetics, 8(3), 326. [https://doi.org/10.3390/biomimetics8030326]
  14. Li, T., Liu, Y., Ma, S., Hu, M., Liu, T., & Song, W. (2022, October). NailRing: An intelligent ring for recognizing micro-gestures in mixed reality. In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (pp. 178-186). IEEE. [https://doi.org/10.1109/ISMAR55827.2022.00032]
  15. Lomas, A. (2014, April). Cellular forms: An artistic exploration of morphogenesis. In SIGGRAPH Studio (pp. 1-1). [https://doi.org/10.1145/2619195.2656282]
  16. Mistry, P., Maes, P., & Chang, L. (2009). WUW - Wear Ur World: A wearable gestural interface. In CHI '09 Extended Abstracts on Human Factors in Computing Systems (pp. 4111-4116). [https://doi.org/10.1145/1520340.1520626]
  17. Mo, G. B., Dudley, J. J., & Kristensson, P. O. (2021, May). Gesture Knitter: A hand gesture design tool for head-mounted mixed reality applications. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-13). [https://doi.org/10.1145/3411764.3445766]
  18. Naguri, C. R., & Bunescu, R. C. (2017, December). Recognition of dynamic hand gestures from 3D motion data using LSTM and CNN architectures. In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA) (pp. 1130-1133). IEEE. [https://doi.org/10.1109/ICMLA.2017.00013]
  19. Nakagaki, K., Fitzgerald, D., Ma, Z., Vink, L., Levine, D., & Ishii, H. (2019, March). inFORCE: Bi-directional 'force' shape display for haptic interaction. In Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (pp. 615-623). [https://doi.org/10.1145/3294109.3295621]
  20. Nogales, R., & Benalcázar, M. E. (2020). A survey on hand gesture recognition using machine learning and infrared information. In Applied Technologies: First International Conference, ICAT 2019, Quito, Ecuador, December 3-5, 2019, Proceedings, Part II (pp. 297-311). Springer International Publishing. [https://doi.org/10.1007/978-3-030-42520-3_24]
  21. OpenAI. (2024). ChatGPT Model: ChatGPT 4o | Temp: 0.7 [Large language model]. https://openai.com/chatgpt/
  22. Park, J., & Hong, J. H. (2024, October). HoloGesture: A multimodal dataset for hand gesture recognition robust to hand textures on head-mounted mixed-reality devices. In 2024 IEEE International Conference on Image Processing (ICIP) (pp. 1-7). IEEE. [https://doi.org/10.1109/ICIP51287.2024.10648161]
  23. Riener, A. (2012). Gestural interaction in vehicular applications. Computer, 45(4), 42-47. [https://doi.org/10.1109/MC.2012.108]
  24. Roeder, A. H. (2021). Arabidopsis sepals: A model system for the emergent process of morphogenesis. Quantitative Plant Biology, 2, e14. [https://doi.org/10.1017/qpb.2021.12]
  25. Sato, E., Yamaguchi, T., & Harashima, F. (2007). Natural interface using pointing behavior for human-robot gestural interaction. IEEE Transactions on Industrial Electronics, 54(2), 1105-1112. [https://doi.org/10.1109/TIE.2007.892728]
  26. Yoon, H. (2023). CELLNET: Bio-Inspired Adaptive Cellular Interface in Mixed Reality. In Cornell Theses and Dissertations. Cornell University Graduate School. [https://doi.org/10.7298/pzpd-xf96]
  27. Yue, K., Lingling, L., Xie, J., Coulter, J. A., & Luo, Z. (2021). Synthesis and regulation of auxin and abscisic acid in maize. Plant Signaling & Behavior, 16(7), 1891756. [https://doi.org/10.1080/15592324.2021.1891756]
  28. Yun, S., Park, H., & Lee, H. S. (2024, October). Hand gesture recognition framework for indoor wheeled mobile robots using hand shape and pose. In 2024 24th International Conference on Control, Automation and Systems (ICCAS) (pp. 1349-1352). IEEE. [https://doi.org/10.23919/ICCAS63016.2024.10773185]
  29. Zabulis, X., Baltzakis, H., & Argyros, A. A. (2009). Vision-based hand gesture recognition for human-computer interaction. The Universal Access Handbook, 34, 30. [https://doi.org/10.1201/9781420064995-c34]
  30. Zhu, M., Chen, W., Mirabet, V., Hong, L., Bovio, S., Strauss, S., ... & Roeder, A. H. (2020). Robust organ size requires robust timing of initiation orchestrated by focused auxin and cytokinin signalling. Nature Plants, 6(6), 686-698. [https://doi.org/10.1038/s41477-020-0666-7]
  31. Zick, L. A., Martinelli, D., Schneider de Oliveira, A., & Cremer Kalempa, V. (2024). Teleoperation system for multiple robots with intuitive hand recognition interface. Scientific Reports, 14(1), 1-11. [https://doi.org/10.1038/s41598-024-80898-x]