THE BODYLESS SOUND AND THE RE-EMBODIED SOUND: AN EXPANSION OF THE SONIC BODY OF THE INSTRUMENT

ABSTRACT: The development of recording technologies, audio manipulation techniques, and sound synthesis opened new sonic horizons. At the same time, realising or reproducing these new sounds creates issues of disembodiment and/or a total lack of physical-gesture-to-audio relationship. Understanding the impact these issues have on our perception and comprehension of music becomes central in the light of new creative practices, in which developing hardware and software has become part of the creative process. These creative practices force us to re-think the role of performance and the medium (musical instruments) in the essence of the musical work. Building upon previous research, a set of possible configurations for hyperinstrument design is presented in this article with the aim of introducing novel ways of thinking about the relationship between the physical body of the instrument (the resonant body), the sonic body (the acoustic phenomena unfolding in a physical space), and performance.



Introduction
For centuries, music has been produced by interacting with instruments that rely on their physical bodies to produce sound. The human voice, the oldest musical instrument, has sounding characteristics, such as timbre or range, defined by its physicality. This is a fact that we cannot escape, even though we can train our bodies, vocal cords, diaphragms, tongues, lips, or any of the muscles involved in the production of sound. Even after developing the highest singing skills, we cannot go beyond what is physically possible for our bodies and produce a sound that has the range, power, or articulation possible on an instrument such as the trumpet.
Similarly, other characteristics of sound depend on the physical action that triggers it.
Each instrument that falls in the category of idiophones, for example, has a unique timbre defined by the material it is made from, its shape, and its size. Furthermore, this timbre unfolds, or develops over time as described by Denis Smalley's morphological archetypes (Smalley, 1986: 68), depending on the way its body is excited, be it struck with a wooden stick or a felt mallet, or bowed with a violin bow or a threaded string. In this way, most musical instruments produce sounds with specific characteristics that depend on the materials they are made of and the configuration of their bodies, as well as the actions through which they are performed.
Throughout the last century, new paradigms for music making have appeared, offering music makers innovative avenues for creativity. Advanced modes of sound production have arisen with the invention of audio recording technologies and electronic musical instruments. The characteristics of sound no longer remain tied to the acoustic properties of the body that produces it, nor to the physicality of a performative action. These new sounds can be shaped in many ways through different techniques and prowess in sound manipulation. Some sounds can be created from scratch using various sound synthesis techniques. Others are the result of the manipulation of audio recordings. I call these sounds bodyless sounds, a term that encompasses Schaeffer's sound object and Schafer's schizophonia concepts to theorise the acoustic phenomena of sound as a sort of spatiotemporal ethereal body or sonic body (discussed in the following section).
The loudspeaker, probably one of the most useful inventions of the new sound production paradigms, could only produce a specific set of sounds with a relatively short range of possibilities if it produced sound based only on the physical properties of its construction. Through a complex and clever system, however, combining a processed signal and electric power, the loudspeaker, driven by a transducer that converts the electrical signal into mechanical motion, is capable of producing sounds that would naturally be created only by other vibrating or resonating bodies, not by the loudspeaker itself. In this way, sounds that were originally produced, for instance, by the vibrating body of a plucked string are re-embodied in the physicality of the loudspeaker, taking on new sounding characteristics unique to it. A 3-inch 4-watt 8-ohm loudspeaker will not produce the same sound that a 16-inch 1000-watt 8-ohm one is capable of, although both are processing the same signal. Sometimes, bodyless sounds are not created or transformed in a pre-production exercise but are the result of forcing objects to vibrate with the use of transducers, inserting the sound into a new body. This process fuses the original sound and the resonant characteristics of the excited body, as is the case in David Tudor's Rainforest (Driscoll and Rogalsky, 2004). It is important to note that the concept of re-embodied sound does not apply to the effects of reflection (reverberation) or refraction (amplitude change), as it is assumed that sound waves will interact naturally with different bodies, thus giving sounds new acoustic characteristics (for instance, the transformation of sound due to impulse responses).
The re-embodied sound, in contrast, has been produced by a medium other than that which would normally produce it, as happens in Murray Schafer's concept of schizophonia. The difference between Schafer's idea and the re-embodied sound is that the latter is not concerned with the origin of the sound but only with the qualities of the new one and its "new" emission source body.
In this article, I delve into the concepts of the bodyless sound and the re-embodied sound in two sections dedicated to understanding their implications. In the second section, I list a few cases in the hyperinstrument field (electronically augmented instruments) where these ideas have become an important focus of research. Finally, reflecting on the development of my HypeSax, an augmented saxophone, I propose a set of configurations to expand the sonic body of a hyperinstrument.

The bodyless sound

Audio emission source
To begin the reflection on the concept of bodyless sound, let us consider the audio emission source. In wind instruments, for instance, the fluctuation and modification of the air column produce a pitched sound, which radiates outwards from the instrument's body. This is the paradigm of traditional acoustic instruments. Nevertheless, an alternative to this model appeared with the invention of electronic instruments and amplification through various types of transducers. Loudspeakers, to mention the most common example, provide an extension of the sound emission source while expanding the capabilities of instruments and their presence in space. At the same time, this invention opens new avenues for reflection on how the instrument is perceived.
Creative approaches to sound radiation and the effects of the manipulation of the source have been explored in works such as Rainforest by David Tudor, where a pre-recorded soundscape is reproduced through what he called 'instrumental speakers': a set of objects that, through the use of mounted transducers, acted as loudspeakers (Driscoll and Rogalsky, 2004). Similarly, Alvin Lucier created works that challenged the traditional model of sound radiation. His Music on a Long Thin Wire (Lucier and Simon, 1980: 160) allows the acoustic phenomenon of feedback to become a fundamental element of the work, while in I'm Sitting in a Room (Lucier and Simon, 1980: 30), he explores the effect of sound degradation as the resonant qualities of the room take over. Equally important is the work of John Driscoll, who, in multiple compositions, focuses on resonance in sculptural materials, architectural resonance, and feedback (Driscoll, 2012). These approaches created a paradigm of sound source expansion and raised the implications of different audio setups, which are further discussed here.
New hybrid instruments-including hyperinstruments-also employ hybridity in sound emission. By mounting transducers and loudspeakers directly on the acoustic instruments, it is possible to combine acoustically and electronically produced sound. Examples of similar setups are further discussed in the second section.

Instruments that produce bodyless sound: Electrophones
Every musical instrument is a canvas for creativity, especially in the hands of the expert musician, composer, or performer. Nevertheless, the number of sounds that can be produced using traditional acoustic instruments is limited due to the physicality of their construction. A wooden recorder, for instance, can only offer tones within a specific range.
These tones can vary in spectral content depending on the performing technique. With the use of more complex or extended techniques, the recorder can offer a new range of sounds, for instance by striking its body with the fingernails or rubbing a stick against its body and over the tone holes (alla güiro). In this case, all sounds carry the sounding qualities of the wood used to build the recorder, plus qualities of a second object used to produce sound with the recorder such as air, fingernails, or a stick.
The physicality of an instrument and the way in which we interact with it give each sound a unique identity defined by a set of characteristics. These characteristics, explored by Pierre Schaeffer in his Treatise on Musical Objects,1 are better understood through the concept of spectromorphology proposed by Denis Smalley. This concept encapsulates the spectrum of a sound, consisting of the totality of its perceptible frequencies, and their relationship through the various stages of transformation of sound as it unfolds in time. The concept observes sound from its temporal origin to its end, describing its changes in dynamics (or morphology) and timbre (spectral typology). 2 The unique spectromorphology of each sound produced by a musical instrument is determined by its materials and construction, as well as the performing technique used to play it. These characteristics cannot be changed through traditional means. The musical instruments that fall under this paradigm can be classified under one of the following categories of the Hornbostel-Sachs system: idiophones, membranophones, chordophones, and aerophones. This catalogue groups instruments according to the ways in which they produce sound. All the aforementioned categories include instruments that produce sound by vibrating their body, one of their components, or a column of air running through their body.

1 (Schaeffer, 2017). In this book, Schaeffer introduced the concept of typomorphology of sound, an analytical tool referring to the typology and the morphology of sound. The difference between these two is one of function: 'typology seeks to identify and isolate sonic objects from a sound continuum, and then to compare and classify them. Morphology seeks to qualify (or describe) the objects' (Palombini, 1993: 66). In typology, sounds are classified as balanced (not too complex, not too simple) or unbalanced, and by their sustainment or iteration. The morphology is divided into seven categories: mass, harmonic timbre, grain, allure, dynamic, melodic profile, and mass profile.
Electrophones played an important role in the past century by allowing musicians to produce new sounds and powerful outputs (especially useful for large concert venues and setups). Electrophones helped define new genres such as electronic music and experimental academic music, as well as popular genres like rock, pop, and many others. They not only changed the sound of music, and perhaps the taste of mass audiences, but also set new paradigms that helped us think about sound more deeply. The synthesiser, for instance, has allowed experts and non-experts alike to explore the spectromorphology of sound by liberating the instrument from the physicality of its body, as the sounds it can produce are not entirely determined by it. Moreover, by decoupling sound and body, electrophones have created a paradigm that never existed before: a sound with no body. The body does not need to be entirely removed; it is simply no longer essential. According to the Hornbostel-Sachs system, electrophones are

Instruments that use materials generating acoustic sounds, mechanically-driven signal sources, electronically stored data or electronic circuitry to produce electrical signals that are passed to a loudspeaker to deliver sound (MIMO Consortium, 2011: 21).
While there are still bodies involved (the instrument and the performer), their physicality is not necessarily responsible for the characteristics of the sound. A theremin, for instance, housed in a wooden box and outputting sound through an external speaker, having a body and needing the body of the performer, will not produce a different sound if its housing is replaced by a plastic one or removed altogether. Its electronic components could be reassembled inside a sewing machine and, as long as the performer can interact with the pitch and volume antennas, the sound will be the same.
Regardless of the original action that triggers their sound, electrophones tend to rely on loudspeakers to output a sonorous event. Unlike the instruments classified in the other categories, electrophones' sounds do not necessarily maintain the constraint that ties together material, construction, and performance. Thanks to the invention of the loudspeaker (and other kinds of transducers), 'non-physical' electronic or virtually designed sounds can be produced, and sounds of physical origin can be reproduced without the presence of the vibrating body or instrument that would otherwise generate that sound.
This system was first published by Erich Moritz von Hornbostel and Curt Sachs in Zeitschrift für Ethnologie in 1914. It was updated and published in 2011 by the Musical Instrument Museums Online Project (MIMO). The revision is available online at: http://www.mimo-international.com/documents/Hornbostel%20Sachs.pdf The term 'Software,' as included in the list, refers to a stand-alone software application running on a computer, for instance, and not to the software running on a microcontroller inside an instrument such as a digital instrument.

The concept of Bodyless Sound
The development of different recording techniques (and, more recently, electronic media and computers) made it possible to understand and explore sound in a different way, thus enabling the manipulation of sound to be reproduced through speakers. This paradigm produces a unique consequence: a bodyless sound. This concept refers to the fact that sounds produced by loudspeakers (or other electronic mediation), even when they have a physical origin, can have the sound qualities of a body that is not present. A synthesiser, for instance, can produce a variety of timbres without any physical modification to the instrument. Its sound is not subject to its physicality; it may be subject to the configuration of its electronic components, but this configuration could be presented in many shapes and housing bodies without a change in the sound. Similarly, a recording of an acoustic instrument such as a violin can be processed and, even though it has a physical origin, once it is reproduced there is no body of a "modified violin". I should clarify that, while the speaker itself is the body producing the sound, I do not see it as the body of the sound, since its physicality does not shape the characteristics of the sound, although the characteristics of the speaker will, to some degree, affect some aspects of it.
A bodyless sound, free of a body, can have almost any acoustic characteristic. However, it does not escape the definition of the sound object by Pierre Schaeffer (Schaeffer, 2017), summarised by Chion (2009): in so many words, a sound event perceived as a whole, which is not the sounding body, nor a physical signal, a recorded fragment, a notated symbol on a score, or a state of mind. It transcends individual experiences and can be analysed owing to its objectivity. It is the pure sound phenomenon and the characteristics that define it and make it recognisable as a whole.
The essence of the bodyless sound is that it is produced by some process that is not tied to the physicality of a body (most likely, as explained before, by a transducer), sometimes utilising sounds that have a natural origin (a recording, for instance). This concept differs from some sound ecologists' ideas, such as Murray Schafer's concept of schizophonia. The bodyless sound is not preoccupied with the separation of the sound from the original body that produced it, as the concept of schizophonia is (Schafer, 1969: 43). The proposition of this concept is that sound becomes a physical entity on its own, even though it does not possess a body; it either was created without one (pure electronics) or has lost its original producer-body. It has a sort of ethereal body that has a spatiotemporal form and qualities. It is a sound object, and it occupies a space, with a stronger 'density' right by its emitter (i.e., a speaker), where its amplitude is at its highest level.

Bodyless sound and the issue of embodiment
Freeing sound from the constraints of a physical correspondence can actually broaden creative possibilities. The most notable cases of creative approaches that take advantage of the bodyless sound are musique concrète and electronic music. Musique concrète consists of recorded sounds that are manipulated and reorganised in such a way that their bodily characteristics can be completely lost and replaced. Electronic music, produced with analogue or digital signal generation (oscillators), is intrinsically bodyless. The 'un-natural' origin of electronic sounds creates the possibility of shaping their spectromorphology, giving these sounds a level of malleability that sounds produced with physical instruments do not possess. A potential problem, however, awaits on the horizon of these creative approaches, as the lack of a body producing the sound tends to cause reception problems with the audience.
Multiple techniques for creating musique concrète and electronic music works have been developed alongside the new technologies that have appeared in the last eighty years. The technical possibilities, as well as the many musical styles that emerged from them, are beyond the scope of this article. Nevertheless, the interest here is in reflecting on how the possibility of sound that is not constrained by physicality has changed the way in which we think about music. For example, spectral music, experimental electronic music, live coding, and noise music have all been influenced by the study of and experimentation with the theory of sound phenomena. These cases, however, would be too large to cover in one article, so this study focuses on how they have been influenced by the bodyless sound, and on what defines the bodyless sound: the lack of a body.
When we think about a body producing sound, we think of a moving one. This makes sense, as a static body would not naturally be able to expand its presence through sound. Such is the relationship between body and sound: we have developed specific expectations about their interactions. When we see a performer playing an instrument, for instance, we know what sound to expect if we see a certain performative action. In other words, a physical gesture corresponds to a musical one.
Often, physical gestures can express properties of music. These gestures, learned throughout our lives, gain, lose, or replace their meaning according to our experiences.
A movement such as lifting a hand can imply pitch ascending or volume rising. In a different context, however, it can be a cue between musicians, which predisposes the listener to expect an event that might be of significance to the overall shape of the work.
This movement, the expectation it triggers, and its consequence speak to the listener and reinforce engagement. The formation of meaning takes place when synaesthetic or kinaesthetic transformations are present thanks to these interactions. As stated before, only suggestions of meaning will exist, but the listener will ascribe meaning to the observed interactions. In synaesthetic transformation, physical properties of the sound, such as frequency, duration, spectral density, and loudness, are assigned a representation that links to spatial, visual, and tactile metaphors such as 'large,' 'heavy,' and 'rough.' Through kinaesthetic transformation, the dynamics of sound properties afford impressions such as movement, gesture, tension, and release of tension (Leman, 2009: 128).
Let us now remember that the argument of this section is that some sounds are born bodyless. While physical gestures and sonic gestures are seamlessly tied in our minds (a loud sound corresponds to a large, strong gesture of the arm against a string, while a gentle rustle does not), the reader should be aware that, in the following pages, references to physical gestures are not meant to focus attention on the action itself but on the mental image of that action and its sounding correspondent. This is in order to observe how we make sense of a bodyless sound.

Noam Chomsky, however, in 'Human Language and Other Semiotic Systems' states that
To determine whether music, or mathematics, or the communication system of bees, or the system of ape calls, is a 'language,' we must first be told what counts as a 'language' (Chomsky, 2012: 430).

RICERCARE
Under this view, one can envision a system of codes embedded in a musical work which can act as a 'system of communication'. The question here is, how can a bodyless sound suggest any kind of gesture that will allow for embodiment to be the bridge that connects the work and the audience? How can a musical work, based on bodyless sounds, confront the issue of embodiment when performance or performative actions are hidden from the audience?
Marc Leman presents multiple views through which it becomes easier to understand how musical gesture can suggest meaning (Leman, 2008: 1-26; 2009). According to the author, the suggestion of meaning can be provided by factors such as behavioural resonance (or 'entrainment,' as described by Clayton, Sager, and Will (2004)), in which rhythmic synchrony is achieved by means of empathy between two subjects, in this case performer and audience, or between performers in the same ensemble. This phenomenon of physical and biological origins has a great impact on the reciprocity between the performer and the audience, and it 'may contribute to the "magic" atmosphere that facilitates direct involvement with what happens on the scene' (Leman, 2008: 5). Nevertheless, this 'spell' cannot be tampered with by introducing 'too much awareness,' as involving the mind in thinking and reasoning seems to break the line of communication established by entrainment. In other words, the implied meaning is one that is not (or should not be) thought of as something fixed by the intention of the composer or performer. Leman makes it completely clear that this suggestion is a subjectivist approach in which '[previous] experiences can provide a basis for speculative interpretations of how music feels and what it means' (Leman, 2008: 12).
Another important factor in suggesting meaning through musical gesture is signification. In this case, unlike with entrainment, reasoning is involved. By previously introducing a verbal description or contextualisation, the musical content is viewed in a new light that offers novel possibilities for engagement and interpretation (of meaning).
These potential interpretations often link subjective experiences with a broader and historical context. In the words of Robert Hatten: 'The linkage between sound and meaning, though mediated by forms, is also mediated by habits of association that, when stylistically encoded, produce correlations, and when strategically earned (inferred through a stylistically constrained interpretive process) produce interpretations' (Hatten, 1994: 275). While cultural and historical context does not imply clarification of musical meaning, it facilitates access to music. Therefore, new music can benefit from good program notes, which help the listener to engage with it when no historical or cultural reference (such as form or tonality) is present.
An action-based factor, also described by Leman, implies that physical energies (namely movement, perceived visually or sonorously) have an impact on the body and mind. This impact brings signification through gesture. Corporeal articulations provide meaning through a process that involves accessing memory and empathy. The listener's musical involvement through corporeal engagement opens the possibility of interacting with music in many ways such as interactions based on mimesis; on goal-directed gestures that are culture-dependent; or those involving responses based on emotive, affective, or expressive capabilities centred on social interaction.
At this point, it becomes important to note that, in a traditional music-making setting, there is more than one body involved in the musical experience: the body of the performer, the resonant body (the instrument), and the body of the listener. Nevertheless, in music employing bodyless sound, the first two become fused into a sounding entity, as the physical-gesture-to-audio relationship becomes obscure due to the hidden sound production process.

Fernando Iazzetta states that
Actions such as turning knobs or pushing levers are current in the use of today's technology, but they cannot be considered gestures. Also, to type a few words in a computer's keyboard has nothing to do with gesture since the movement of pressing each key does not convey any special meaning. It does not matter who or what performed that action, neither in which way it was performed: the result is always the same (Iazzetta, 2000: 260).
This assertion puts music made with bodyless sound, namely electronic music, in a difficult position, as making this kind of music has traditionally meant that the interface tends to hide the process through which the sound is produced. Therefore, pressing a button is a gesture with no intrinsic meaning and a multitude of possible outcomes. Furthermore, if the musical work is acousmatic, the problem becomes greater.
Codes, however, slowly become part of our common language and, as we get exposed to codes that fit a specific context, we can learn and give meaning to gestures, both aural and physical. For instance, the gesture of turning a knob in a kitchen relates to turning on an appliance. On the other hand, if at a night club we see a DJ turning a knob, we expect a specific musical result. If gesture is directly linked to the body, then gesture is directly linked to the medium-the instrument, physical or not (such as software).

Revista del Departamento de Música - Grupo de investigación en Estudios musicales, Núm. 15 (2022)

A bodyless sound exists without having a direct connection to a body. However, it exists thanks to a medium, which paradoxically is a body, but not one that necessarily defines the bodyless sound with its own acoustic properties. In the following section, the importance of the medium is discussed to situate the bodyless sound in the context of creative works and musical-technological developments.
The re-embodied sound

In Integrating Score, Performance, and Medium in the Work-Concept (Ramos Flores, 2021), a tripartite model that seeks to re-evaluate the importance of the medium in the ontology of the musical work (the work-concept) is presented. In this model, performance, score, and medium are the three fundamental elements that make the musical work, feeding and empowering each other as a unity. Previous models, such as Jean-Jacques Nattiez's (Nattiez, 1990: 73), focused on the poietic and esthesic processes and failed to integrate the medium (see Figure 2), which makes sense in the historical context in which these models were devised. The medium, however, has become more important in today's music-making practices, wherein music creators tend to develop their own mediums, software and hardware, as part of their creative process.

4 More information on the different versions of Rainforest can be found at https://davidtudor.org/Works/rainforest.html

The medium, as proposed in the tripartite model, represents the object, or set of objects, through which the musical work is realised. In this model, the medium itself has no extrapolation in the work and does not have the power to give the score the category of work without the performance. It is through gesture that performance empowers the medium. In a similar way, the performance alone cannot translate the score out of the imaginary and place it in the real world. In this way, the three are intimately linked in the act of realising the musical work (see Figure 2).

Instrumentality in technologically devised music
Today, new tools, such as computers, high-performance recording systems, and powerful software, allow composers to study the phenomenon of sound in a different way, even allowing real-time manipulation of sound. With all these new paradigms, new problems have also arisen, such as the disembodiment of sound generated by 'virtual sources'; the lack of connection between performative gestures and electronic audio processes (for instance, the action of pressing a button might not correspond with the complex sounding event it triggers); and mutations of sound through electronic processes. The unique quality that physicality gives to sound, the grain, as Barthes calls it (1987: 181),7 is at the centre of these questions.
Live performance is fundamental for the dialogue between the creator and the audience thanks to the personal aspect that physicality gives to sound. Roland Barthes believes that there is a unique aspect of music making that is exclusive to each performance. Barthes calls this the grain of the instrument or voice, which 'is not-or is not merely-the timbre; the significance it opens cannot better be defined, indeed, than by the very friction between the music and the something else, which something else is the particular language' (Barthes, 1987: 185). The grain, the unrepeatable characteristic of live sound and live performances, is 'the outcome not just of the physical nature of the instrument, but of its physical limitations' (Croft, 2007: 65). One cannot separate this physicality without affecting the perception of the grain, and thus affecting the dialogue.

7 The idea of the grain can be interpreted in different ways: it can be found somewhere between the texture and the timbre of the singing voice, or perhaps between the language and the voice, or between the bodily expressions that accompany a musical gesture and the meaning of poetry in the song. In any case, the grain comes from the unique materiality of the sound and the emergence of pleasure that one experiences while listening to a particular performance.
Unfortunately, the grain is hidden when the process that gives music this characteristic is not evident. In the case of live electronic music, it becomes more difficult to recognise the grain, the uniqueness of the performance, when the medium does not have obvious limitations and most, if not all, of its processes happen 'inside a box.' John Croft says, in his article 'Theses on liveness,' that […] the limits of an instrument are essential to it being perceived as an instrument at all. A loudspeaker can, in principle, produce any sound; on an instrument, almost all sounds are impossible, and of those that are possible, some are more difficult to produce than others, and this difficulty is patent in the act of performance. This is surely why performance engages us in a way that cannot be accounted for in terms of the sound alone: the difficulty, the impossibilities, the encounter with limits, the finitude of the instrumental performance resonates with wider human experience (Croft, 2007: 62).
It is important to address this issue, especially when making music through electronic means, as the sound is designed and generated in a virtual environment and performers usually interact with sound using generic devices that are not necessarily designed to respond to a physical gesture corresponding to a musical gesture. Interfaces such as the computer keyboard, for instance, were not designed for musical purposes. They can mediate a physical gesture with a process that is neither seen nor heard by the listener, leading to any number of sounding results. While in such cases the interface takes the role of the instrument, it lacks true instrumentality.
David Burrows observes that a key feature of a musical instrument is to act as mediator between the performer's body and the sound they produce (Burrows, 1987). He uses the concept of instrumentality to describe the purpose of the musical instrument, theorising it through the concept of the 'transitional object.' 8 For Burrows, instrumentality is the capacity of the instrument to be the means of physical expression, the mediator 'between the material and the immaterial' (Hardjowirogo, 2017: 15). As Hardjowirogo observes, musical instruments have changed over time-be it in their appearance, their technical functionality, their playing technique, or their sounds-and, as they have changed, so too have our understandings of what a musical instrument is; the lacking precision of the current notion of the instrument and its incompatibility with contemporary instrumental forms are consequences of a technocultural process that raises fundamental questions about the identity of the musical instrument. Philip Auslander, on the other hand, argues that an important aspect of instrumentality is found in the skill and technique involved in instrumental performance (Auslander, 2017). In other words, effort is a key feature of instrumentality. Philip Alperson contrasts these ideas with the 'intentionality' of the instrument, noting that 'the character, so to speak, of musical instruments-their typical uses, the way they have come to be played and thought of in the history of music-is often rooted in the technical development of the physical instrument and its corresponding musical possibilities' (Alperson, 2008: 41).

8 Based on the 'transitional objects' concept by D.W. Winnicott, which represents 'childhood articles such as blankets or stuffed toys that have a constructively ambiguous status between the small child's self and his or her emergent sense of otherness' (Burrows, 1987: 121).
This means that there is instrumentality implicit in the intention embedded in the object by the luthier, programmer, performer, or composer, as instruments-especially those 'interfaces' used in electronic and digital music-making-cannot be reduced to mere material objects when they are used to make music.
Croft, who supports the idea that 'the difficulty, the impossibilities, the encounter with limits, the finitude of the instrumental performance resonates with wider human experience,' proposes a set of 'conditions for instrumentality' (Croft, 2007: 64) for instruments or interfaces mediating between the performer's actions and the hidden processes embedded in electronic or digital mediums:
-The response of the computer must be proportionate to the performer's action.
-The response must share some energetic and morphological characteristics with the performer's action.
-The onset of the response must be synchronous with the performer's action.
-There must be a timbral continuum, affinity, or fusion between the untreated instrumental sound and the response of the electronics.
-The relationship between the performer and the computer must be stable.
-The relationship must be scrutable.
-The relationship must be learnable by the performer.
-The mapping must be sufficiently fine-grained.
Croft's conditions resonate with the ideas of Burrows, Auslander, and Alperson. In addition, he finds instrumentality to be threatened by the disembodied sound coming out of a loudspeaker rather than the instrument itself. He draws attention to the aspect of 'liveness' in live performance-more specifically live electronics. Croft justifies the proposed conditions with the link between the performer's actions and the computer's response-although this could be the case with any instrument that does not necessarily rely on a computer-which would strengthen instrumentality and its 'resonance with a wider human experience.' Considering this meeting-point of perceptual experiences, we can say that instrumentality is the first step towards re-embodiment of an otherwise bodyless sound.

The medium as a creative exploration
While the tripartite model is the first attempt to consolidate the medium as an essential component of the musical work-especially considering the new creative practices of building instruments and software-the genesis of this idea comes from recognising a long history of creative minds exploring and building new mediums. The creation and development of musical instruments is a large field of research, which cannot be covered in one article. However, it is worth discussing the prior state of the art in the augmented instruments field to better understand the concept of re-embodied sound. Many kinds of electronically augmented instruments-also described as meta- or hyperinstruments-have been designed. A few of the seminal works that made an effort to give a body to a bodyless sound are discussed in the following pages.
Tod Machover led the first major research on hyperinstruments. His work began at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) with research on 'the adaptation of the computer to the needs of sophisticated real-time musical performance' (1978), as well as 'investigating the technical and scientific problems of collecting, analysing, and interpreting musical data' after the establishment of the MIDI standards (early 1980s) (Machover, 1992). His work continued over the following years at the Massachusetts Institute of Technology (MIT), leading to the creation of systems named hyperinstruments, 'a combination of machine-augmented instrumental technique, knowledge-based performance monitoring, and intelligent structure music generation' (Machover, 1986).
A new field of research was created with the work of Machover and the Hyperinstrument Group at the MIT Media Lab. This field encompasses any research involving the use of acoustic instruments-or any of their parts-in conjunction with new technologies, aiming to expand the performative and sounding affordances that traditional instruments provide.

Out of the Hyperinstrument Research Group at MIT, other seminal instruments and approaches inspired future generations. Hand-tracking devices, for instance, were developed for Machover's Bug-Mudra to track performing gestures-similar to those found in conducting practices-to shape sound. This approach was necessary because the instruments developed at the time did not allow for continuous tracking of motion. This need placed new emphasis on the importance of embodiment in performance, as well as on possible avenues for mapping gestures to parameter control.
Diana Young, on the other hand, developed the Hyperbow, an electronically augmented bow capable of collecting data from the bowing technique. This data was then mapped to control various parameters needed to process audio (see Figure 3). The device focuses on performance, tapping into performing gestures that would usually have a sounding result and re-mapping them to parameter control. Another approach was developed by Dan Trueman and Perry Cook as part of a reimagination of the traditional concept of the violin (see Figure 5). Their project also delved into a novel approach to sound emission with the invention of the BoSSA (Cook and Trueman, 2000).
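The gesture-to-parameter mapping described above can be illustrated with a minimal sketch. The sensor ranges, parameter names, and linear mapping below are purely hypothetical assumptions chosen for illustration; they do not reproduce the actual implementation of the Hyperbow or any other instrument discussed here.

```python
# Hypothetical sketch of gesture-to-parameter mapping: a raw sensor reading
# (e.g. bow pressure) is clamped, normalised, and re-mapped onto a synthesis
# parameter (e.g. a filter cutoff). All ranges are illustrative assumptions.

def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a sensor value from its input range to a parameter range."""
    value = max(in_min, min(in_max, value))  # clamp to the sensor's range
    normalised = (value - in_min) / (in_max - in_min)
    return out_min + normalised * (out_max - out_min)

def bow_pressure_to_cutoff(pressure):
    """Map a 0-1023 pressure reading to a 200-8000 Hz filter cutoff."""
    return scale(pressure, 0, 1023, 200.0, 8000.0)

print(bow_pressure_to_cutoff(0))     # lightest touch -> 200.0 Hz
print(bow_pressure_to_cutoff(1023))  # full pressure  -> 8000.0 Hz
```

In a real system such a function would run inside the audio or sensor callback, and the mapping curve (linear here) is itself a design decision that shapes the perceived instrumentality.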
In the case of electrophones, the tendency is to rely on a 'bodyless sound system,' in which the sonic result of the performance emerges from a loudspeaker separated from the instrument. This can be both beneficial and problematic depending on the situation: electric guitars in a rock concert, for instance, might benefit from the use of an external PA directed at the audience, while using a PA facing the audience with no monitor speakers might make synchronisation between performers difficult. To address the issues of sound disembodiment, multiple solutions have been developed, ranging from careful PA setups to clever loudspeaker arrays. An example of these solutions is the hemispherical loudspeaker originally developed by Dan Trueman for the BoSSA (Cook and Trueman, 2000), with further iterations detailed by Scott Smallwood (Smallwood et al., 2009). In these cases, while the system remains 'bodyless,' the effect obtained is that the sound is perceived as emerging from the instrument, and in multiple directions, as would naturally happen (to some degree) with an acoustic instrument. In these projects, an effort to re-configure the core elements of a violin makes the 'instrumentality' evident, while allowing for the remapping of physical gestures into parameter control with highly expressive results. At the same time, the integrated sound system pushes the produced bodyless sounds towards a new perception in which they almost lose their bodyless profile, as performance, emitter body, and sound source fuse together.
Some electrophones include integrated loudspeakers, from electronic keyboards with simple one-loudspeaker setups to more complex designs such as Yamaha's CVP-809GP Clavinova piano, which features multiple speakers situated in a grand piano body, thus producing a more natural sound. 9 Cook and Leider's SqueezeVox (Bart and Lisa), for instance, presents a design in which an accordion has been modified to fit and work with an embedded electronic system that controls vocal synthesis models. In this case, while the instrument retains its original appearance and, to some degree, mode of performance, an electronic system (which includes an onboard speaker) is in charge of producing the sound (Cook and Leider, 2000). Similarly, Cook's Nukulele'elua takes the body of a ukulele. This instrument features two speakers, one facing outward and the other facing towards the performer. It also features two FSRs placed over the fingerboard and bridge. The performer interacts with these sensors as if pressing and plucking the strings. While the instrument retains the appearance of the acoustic ukulele, the way it produces sound has been replaced by an electronic system (Cook, 2003). The body of these instruments does not provide acoustically meaningful characteristics to the sound but, by virtue of including an onboard sound system, it gains more credibility in the gesture-to-sound relationship and its perception.

9 More information about the Yamaha CVP Clavinova piano series can be found at https://usa.yamaha.com/products/musical_instruments/pianos/clavinova/cvp-series.html
The previously mentioned developments were special in their time, as hyperinstruments have traditionally been built under the bodyless sound system paradigm. In recent developments, however, there has been a tendency towards hybrid designs. Examples include the hybrid string quartet developed by Juan Arroyo, in which microphones and transducers are installed by a luthier inside the instruments. The signal captured by the system is then processed by a computer and output back into the instrument's body, turning it into the source from which the sound emerges (Houlès, 2017: 29-32). The effect of the bodyless sound is then fully neutralised: even though synthesis is present, there is a direct relationship between performative gestures and musical gestures, as well as a correspondence between the acoustic characteristics of the resonant body and the audio output through that same body.
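The signal path of such hybrid designs (pickup or microphone, digital processing, transducer output into the resonant body) can be sketched as a simple block-processing chain. The block-based structure is generic to real-time audio systems; the choice of ring modulation as the processing stage, and all names and values, are illustrative assumptions rather than Arroyo's actual processing.

```python
# Minimal sketch of a hybrid augmented-instrument signal chain:
# microphone/pickup -> digital processing -> transducer on the resonant body.
# Ring modulation stands in for any DSP effect; the parameters are arbitrary.
import math

SAMPLE_RATE = 44100  # samples per second

def process_block(input_block, mod_freq=220.0, start_sample=0):
    """Ring-modulate one block of samples captured from the instrument."""
    output = []
    for n, sample in enumerate(input_block, start=start_sample):
        carrier = math.sin(2 * math.pi * mod_freq * n / SAMPLE_RATE)
        output.append(sample * carrier)
    return output

# In a real system each processed block would be routed to a transducer
# mounted on the instrument's body, so the sound re-emerges from that body.
silence = [0.0] * 64
assert process_block(silence) == [0.0] * 64  # processing silence yields silence
```

The point of the sketch is the routing, not the effect: because the output stage is the instrument's own body rather than an external loudspeaker, the processed signal inherits the acoustic signature of the resonant body.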
Other similar augmented instruments make use of the instrumental body's resonance to amplify the processed signal, 10 of which Laurel Pardue's Svampolin is a notable example, taking the concept of re-embodiment to a new level. In Pardue's instrument, an electric 'stick' violin provides the input signal, which is processed through a Bela board and output via a transducer mounted directly on a violin body with no fingerboard or strings.
All elements-fingerboard, Bela board, transducer, and violin body-are assembled as one violin. However, unlike in a real instrument, there is no direct acoustic interaction between the strings of the stick violin and the body, as a thick layer of foam separates them (see Figure 5). The result is an instrument that feels and responds like a violin but is capable of sounding like any other instrument through synthesis embodied in a real violin setup (Pardue et al., 2019). The outcome of this setup is a new paradigm in which bodyless sound is re-embodied in such a way that the resulting sound is both bodyless and not: it has a synthetic origin but corresponds to the action of the performer's body and carries the acoustic characteristics of the resonant body. The sound is not separated from the body, but it is not fully produced by it. In terms of expressivity and instrumentality, it corresponds to that of a real violin while retaining the possibility of re-mapping performative gestures through parameter control.

In the case of aerophones, the production of sound is not based on the vibration of the body or one of the elements of the instrument, but on the air column moving through its body. Complex setups have been developed to produce sound with aerophones for research 11 and creative 12 purposes. Unfortunately, these systems override the ability of a performer to play the acoustic instrument at the same time. Transducers cannot be mounted on aerophones, as their bodies are not effective resonators. Hybrid sounds in which acoustics and synthesis are joined in the body of the aerophone are difficult to achieve, as components inside the instrument have a negative effect on the air column, and a second air column-with synthesised sound-is impossible to introduce. If a saxophonist, for instance, is to play the acoustic instrument at the same time as the synthesised sound, it is necessary to insert sound waves directly into the instrument using a loudspeaker.
The IMAREV project, directed by Adrien Mamou-Mani at IRCAM, amongst other achievements allowed its collaborators to explore multiple ways of introducing synthesis into the body of acoustic instruments. As part of this project, Thibaut Meurisse studied the effects of applying active control to wind instruments. Active control consists of 'a way of modifying the way in which a mechanical system vibrates, with the help of sensors, actuators and a controller' (Meurisse, 2014: 4). 13 The author developed a trombone mute based on this principle (Meurisse, 2014).

The sound of a hyperinstrument, like any other electrophone, can be redesigned. With this potential to become something different, hyperinstruments are similar to a painter's canvas.
A canvas has a defined shape, limits, and texture, within which strokes and colours can form a new creation. Similarly, a hyperinstrument has limits, defined possibilities, physical and technical characteristics. They can be the space through which sounds and timbres can be reconfigured even before a concrete musical idea is realised with these new sounds.
The ability to sculpt-or paint, in the canvas metaphor-sound via electronic means opens up new horizons of possibilities; there, the composer not only composes with sounds but composes the sounds themselves (Risset, 1994), a feature found at the core of electronic music.

A set of configurations for sound re-embodiment and expansion of the sonic body of a hyperinstrument
The multiple audio emission sources available within the developments described in the previous section, as well as the search for sound re-embodiment and new physicality, inform the set of configurations proposed here. The Un-mute, one of the components of the HypeSax-which sits inside the bell of the saxophone and features a loudspeaker-provides a solution for integrating a sound system into the body of the instrument. At the same time, the Un-mute, through a USB connection to a second sound system, allows the saxophone to expand its sonic body beyond its physical space by adding external emission sources. In this way, the hybrid capabilities of the HypeSax offer three different configurations of sound embodiment: a) embodied sound, b) disembodied sound, and c) bodily extended sound (see Figure 7)-a model which, perhaps, is worth considering in hyperinstrument design. All these configurations allow for multiple combinations of acoustic, synthesised, or hybrid sound embodiment (resembling the multiple combinations used in Suárez Cifuentes' Libellule).
To achieve sound re-embodiment and expansion of the sonic body of the instrument, the following setups were devised for the HypeSax:
A. The embodied sound configuration offers three sonic possibilities, each relating differently to the body of the instrument or the body of the performer:
• The first and most basic one is based on the fact that the instrument, being an actual acoustic saxophone, produces acoustic sound on its own, without the intervention of the electronic components.
• With the use of the Un-mute-and other sensors mounted on the instrument, or acoustic feedback-bodyless sound is emitted directly from the bell of the saxophone. This creates the sonic perception that allows for the re-embodiment of the sound.
• A third possibility combines the acoustic sound and the synthetic sound-now re-embodied-to produce a hybrid sound. This hybridity can produce a range of sound effects, from the combination of unrelated sounds to multiphony and new timbres.
B. Disembodied sound is achieved by extending the sound via a second audio system.
• With this configuration, the second sound source can produce synthesised or hybrid sound that differs from the sound produced simultaneously by the acoustic saxophone.
• This configuration also allows for synthesised audio to be emitted from the second source without any sonic emanation from the original instrument.
C. Finally, the bodily extended sound can, in a traditional fashion, duplicate, amplify or stream to different locations-via remote networks-acoustic, synthesised, or hybrid sound.
This range of setups not only allows for the expansion of the sonic body of the instrument, but also affords new possibilities for the gesture-to-sound relationship and its perception.
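As a summary, the three configurations and the sound types each affords (as described above) could be encoded in a small lookup structure. The encoding itself is only a hypothetical sketch for clarity, not part of any HypeSax software.

```python
# Illustrative encoding of the three embodiment configurations and the
# sound types each can carry, as described in the text. The data structure
# and function names are hypothetical summaries, not an actual implementation.
CONFIGURATIONS = {
    "embodied": {"acoustic", "synthesised", "hybrid"},        # emitted from the bell via the Un-mute
    "disembodied": {"synthesised", "hybrid"},                 # second, external audio system
    "bodily_extended": {"acoustic", "synthesised", "hybrid"}, # duplicated, amplified, or streamed elsewhere
}

def available_sound_types(configuration):
    """Return the sound types a given embodiment configuration affords."""
    return CONFIGURATIONS[configuration]

print(sorted(available_sound_types("disembodied")))  # ['hybrid', 'synthesised']
```

Laid out this way, the combinatorial space the article describes (configuration by sound type, used singly or in combination) becomes easy to enumerate when planning a piece or an instrument design.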

Cristohper Ramos Flores
The bodyless sound and the re-embodied sound: an expansion of the sonic body of the instrument

Final thoughts
Throughout the last hundred years, our world and our relationship to it have experienced rapid change. With the development of analogue audio recording and reproduction techniques, electric technologies, electronic devices, and, more recently, integrated circuits and complex electronic systems, it became possible to generate new sound worlds that had never been heard before. As we create, understand, and explore new avenues for sonic creation and music making, the creative possibilities expand constantly and substantially. With this, the relationship between the vibrating-resonant body and the sounds it produces has shifted to new paradigms that offer unexplored sonic horizons but also create different challenges.
In musical performative practices, disembodiment (a problematic topic in its own right) is often the reason why some music is alienated from larger audiences. It becomes evident that, because of the current creative trends towards the integration of synthetic and virtually created sonic possibilities, the relationships of body to musical instrument, instrument body to sound, and musical instrument to audio emitter (and variations between them) must be discussed and, perhaps, understood within a theoretical framework that may aid us in better understanding the implications of the bodyless sound and the re-embodied sound.
The proposed set of configurations for sound re-embodiment and expansion of the sonic body of a hyperinstrument sits within a larger discussion that has occupied several creators and researchers. This proposition offers a simple but effective way to think about creative possibilities and to organise the way we understand a sonic body-an important component of music making and audio design, as demonstrated by the large creative world and industry devoted to sound localisation and its many techniques, such as concert stage design (using real instruments), stereo panning, 3D spatialisation, multichannel arrays, Atmos technologies, binaural audio, and many more. This proposition, as exemplified by the HypeSax, is an effective approach to integrating these spatialisation techniques into hyperinstrument or electronic musical instrument design. At the same time, this set of configurations offers new possibilities for the exploration of embodiment and its impact on performance practices and audience reception.
A longer and more thorough study could assess the implications of the topics discussed in this article for current and future musical practices. Future lines of research related to the bodyless sound, sound re-embodiment, and expansion of the sonic body can be envisioned.
They include the cultural implications of the sonic voice of musical instruments and the cultural beliefs that surround them (for instance, Taonga Pūoro, the traditional instruments of New Zealand's Māori, evoke the spirits of nature and deities through their unique voices); the re-embodiment of modified audio (musique concrète) in new bodies and with new signification; the place of the bodyless sound and sound re-embodiment in the proposed tripartite model for the work-concept; and new and future technologies and their implications for the embodiment, disembodiment, and re-embodiment of sound in performative practices. The scope of these topics includes a wide range of possibilities; therefore, an invitation remains open for readers to reflect on their own musical practice and experience through the questions presented in this article.