From the ALSA wiki
Warning
Neither .asoundrc nor /etc/asound.conf is normally required. You should be able to play and record sound without either (assuming your mic and speakers are hooked up properly). If your system won't work without one, and you are running the most current version of ALSA, you probably should file a bug report.
What is a .asoundrc file? Why might I want one?
The .asoundrc file (in your home directory) and /etc/asound.conf (for system-wide settings) are the configuration files for ALSA drivers. Neither file is required for ALSA to work properly. Most applications will work without them. The main use of these two configuration files is to add functionality such as routing and sample-rate conversion. It allows you to create 'virtual devices' that pre- or post-process audio streams. Any properly written ALSA program can use these virtual devices as though they were normal devices.
Before ALSA 1.0.9 you often needed one to make things work at all, and before ALSA 1.0.11 you generally needed one if you wanted to have more than one ALSA application output sound at the same time via the DmixPlugin, but current ALSA versions shouldn't need one.
There are several uses for a .asoundrc. One is to create a personalized configuration for a soundcard. This is useful if you have a 32-channel soundcard and want to reserve 5 channels permanently for recording the drums. For example, you could create a new PCM called 'drumrec' that is always mapped to the same five inputs. The .asoundrc file is quite complicated. You may find it simpler to use the dmix plugin. Also see the page Dmix Plugin.
Lastly, similar .asoundrc files are used internally by ALSA to 'map' standard things, for example, to connect 'default' to 'plughw:0' (this too could be overridden). The configuration is in the file /usr/share/alsa/alsa.conf.
If you're happy with how your card is working, there's no need to use an .asoundrc. There is also not much use in taking an .asoundrc for a 10-channel soundcard and hoping to get more out of a 2-channel (stereo) soundcard. But if you want or have to customize the behaviour of a soundcard to be different from the standard setting, an .asoundrc is essential.
In the discussion that follows, remember that anything mentioned for the .asoundrc file applies equally to /etc/asound.conf, except in the latter case the virtual devices you define can be used by all users on a system.
A brief example.
Put the following in a file named .asoundrc in your home directory:
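(a reconstruction of the kind of snippet that originally appeared here; the name card0 and the card number are placeholders)

```
pcm.card0 {
    type hw    # talk directly to the kernel driver
    card 0     # first soundcard
}

ctl.card0 {
    type hw
    card 0
}
```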
You should replace the name of the card (card0) with something useful, e.g. the one you are using in your /etc/modules.conf file (e.g. cmipci).
This example creates a virtual device named card0 (or whatever you replaced that with) that just connects directly to your hardware's PCM output channels. It also creates a control device with the same name. Neither of these virtual devices actually does anything interesting -- they just act as aliases for the hardware devices.
Where does .asoundrc live?
The .asoundrc file is typically installed in a user's home directory ($HOME/.asoundrc) and is called from /usr/share/alsa/alsa.conf. It is also possible to install a system-wide configuration file as /etc/asound.conf. When an ALSA application starts, both configuration files are read, but the settings in .asoundrc override those in /etc/asound.conf.
Changing things
Most programs require a restart to reread .asoundrc or asound.conf! This includes desktop environment audio daemons such as PulseAudio. For most changes to .asoundrc you will need to restart the sound server (e.g. sudo /etc/init.d/alsa-utils restart) for the changes to take effect.
Default PCM device
Using aplay -L you can get a list of existing PCM output devices. If you want the default to be, for example, a USB device instead of the onboard sound, you can place a pcm.!default line in the .asoundrc. Say aplay -L lists something like
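(a hypothetical listing; the exact entries depend on your hardware)

```
front:CARD=Device,DEV=0
    USB Audio Device, USB Audio
    Front speakers
default:CARD=Device
    USB Audio Device, USB Audio
    Default Audio Device
```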
you can put the following line in your .asoundrc
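(the card name Device is taken from the hypothetical listing above, not from the original page)

```
pcm.!default "front:CARD=Device,DEV=0"
```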
As a result, most if not all applications will now use this device for output unless specified otherwise. The same applies for self-defined devices, as shown below.
The naming of PCM devices
A typical asoundrc starts with a 'PCM hw type'. This gives an ALSA application the ability to start using a special soundcard (plugin or slave) by a given name.
Without this, the soundcard(s) must be accessed with names like 'hw:0,0' or 'default'. For example:
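(an illustrative command, not from the original page, using mplayer's ALSA device syntax in which ':' becomes '=' and ',' becomes '.')

```
mplayer -ao alsa:device=hw=0.0 test.mp3
```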
or with aplay:
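(test.wav is a placeholder file name)

```
aplay -D hw:0,0 test.wav
```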
The numbers after hw: stand for the sound card number and the device number. A third number can be added (hw:0,0,0) for the sub-device number, but it defaults to the next sub-device available. The numbers start from zero, so, for example, to access the first device on the second sound card, you would use hw:1,0.
The keyword 'default' will access the default subdevice on the default soundcard, which will probably be hw:0,0 for a typical single Sound Blaster sound card. Now with the 'PCM hw type' you are able to define aliases for your devices. The syntax for this definition is:
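(a sketch based on the alsa-lib documentation of the hw plugin; bracketed entries are optional)

```
pcm.NAME {
    type hw           # kernel PCM
    card INT/STR      # card number or id
    [device INT]      # device number (default 0)
    [subdevice INT]   # subdevice number (default -1, first available)
}
```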
Here is another example which gives the first soundcard an alias:
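(a reconstruction; the original snippet is missing from this copy, and card 0 is assumed to be the ens1371 card)

```
pcm.ens1371 {
    type hw
    card 0
}

ctl.ens1371 {
    type hw
    card 0
}
```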
Now you can access this card by the alias 'ens1371'.
This definition is helpful if you want to apply any further plugins or slaves in .asoundrc.
Plugins
What are plugins? A plugin (or plug-in) is a computer program that can, or must, interact with another program to provide a certain, usually very specific, function. Typical examples are plugins to display specific graphic formats (e.g., SVG if the browser doesn't support this format natively), to play multimedia files, to encrypt/decrypt email (e.g., PGP), or to filter images in graphic programs. The main program (a web browser or an email client, for example) provides a way for plugins to register themselves with the program, and a protocol by which data is exchanged with plugins. See http://www.alsa-project.org/alsa-doc/alsa-lib/pcm_plugins.html for the list of all plugins.
Now define a slave for this plugin. A very simple slave could be defined as follows:
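(a reconstructed sketch; the slave name sl_hw0 and the hardware device are placeholders)

```
pcm_slave.sl_hw0 {
    pcm "hw:0,0"
}
```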
What is a slave in the first place? The slave is the device that is controlled by the plugin: it receives the plugin's audio output in the case of playback, or provides the input in the case of recording.
This defines a slave without any parameters. It's nothing more than another alias for your sound device. The slightly more complicated thing to understand is that parameters for 'pcm types' must be defined in the slave-definition block. Let's set up a rate converter which shows this behaviour.
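A sketch of such a converter (the name rate44100 and the hw:0,0 slave are assumptions):

```
pcm.rate44100 {
    type rate
    slave {
        pcm "hw:0,0"
        rate 44100     # the slave's fixed sample rate
    }
}
```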
Now you can use this newly created (virtual) device by:
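(the device name is the placeholder defined in the sketch above)

```
aplay -D rate44100 test.wav
```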
This automatically converts your samples to a 44.1 kHz sample rate while playing. It's not very useful because most players, and ALSA, convert samples to the correct sample rate that is supported by your sound card, but you can use it for a conversion to a lower static sample rate, for example. A more complex tool for sample conversions is the PCM type 'plug'. The syntax is:
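(again a sketch based on the documented plug plugin fields; bracketed entries are optional)

```
pcm.NAME {
    type plug
    slave {
        pcm PCM           # slave PCM name
        [format FORMAT]   # e.g. S16_LE
        [channels INT]
        [rate INT]
    }
}
```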
We can use it as follows:
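(a sketch; the device name plug_mono16k and the hw:0,0 slave are placeholders)

```
pcm.plug_mono16k {
    type plug
    slave {
        pcm "hw:0,0"
        format S16_LE
        channels 1
        rate 16000
    }
}
```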
By calling it with:
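(again using the placeholder name plug_mono16k)

```
aplay -D plug_mono16k test.wav
```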
you can convert the samples during playing to the sample format S16_LE, one channel and a sample rate of 16 kHz. If you use aplay with the verbose option -v you will see the settings from the original file. For example, a command like the following (the device name is the placeholder used above)
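```
# illustrative; device and file names are placeholders
aplay -v -D plug_mono16k test.wav
```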
will show the original settings of the sound file test.wav. If you add the definition 'route_policy average' to the plug definition, you will make your output channel be the (arithmetic) average of your input channels.
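For instance (a sketch: with a mono slave, the two input channels are averaged into the single output channel; the device name is a placeholder):

```
pcm.plug_mono_avg {
    type plug
    slave {
        pcm "hw:0,0"
        channels 1
    }
    route_policy average   # average the input channels
}
```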
Splitting front and rear outputs
At first I had a lot of trouble figuring out how I could split the front and rear channels into two devices that could be used independently. The following .asoundrc is what I came up with. It can be used with mplayer, for example, as follows:
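The author's original file is not preserved in this copy of the page. Below is a sketch of one way to achieve the split, assuming a 4-channel card at hw:0,0; all device names are placeholders. Wrap the route devices in a plug if your players need format or rate conversion.

```
# share the 4-channel hardware between both virtual devices
pcm.quad_dmix {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:0,0"
        channels 4
    }
}

# stereo device mapped to the front pair (channels 0 and 1)
pcm.front_out {
    type route
    slave {
        pcm "quad_dmix"
        channels 4
    }
    ttable.0.0 1
    ttable.1.1 1
}

# stereo device mapped to the rear pair (channels 2 and 3)
pcm.rear_out {
    type route
    slave {
        pcm "quad_dmix"
        channels 4
    }
    ttable.0.2 1
    ttable.1.3 1
}
```

With mplayer it could then be used like this (illustrative commands):

```
mplayer -ao alsa:device=front_out movie.avi
mplayer -ao alsa:device=rear_out music.mp3
```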
Enjoy..
Note, for ttable you might use fractions but then you cannot use LC_NUMERIC locales that use characters other than '.' as decimal separator. Actually this is a bug and has already been fixed in versions higher than 1.0.8.
Joining devices to make multichannel
If your card has a number of stereo sub-devices that operate synchronously, you can join them into one virtual multichannel device.
See the documentation for the multi plugin at http://www.alsa-project.org/alsa-doc/alsa-lib/pcm_plugins.html
The following joins two adjacent sub-devices into a 4-channel device. There are 3 optional parameters [card, device, first_subdevice]. It is basically a nested set of plugins: {route {multi {hw0 hw1}}}.
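The original, parameterised definition is not preserved in this copy; the sketch below hard-codes the values used in the example that follows (card 1, device 0, sub-devices 2 and 3) and reuses the name ttable4:

```
pcm.multi_in {
    type multi
    slaves.a.pcm "hw:1,0,2"   # card 1, device 0, sub-device 2
    slaves.a.channels 2
    slaves.b.pcm "hw:1,0,3"   # card 1, device 0, sub-device 3
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}

pcm.ttable4 {
    type route
    slave {
        pcm "multi_in"
        channels 4
    }
    ttable.0.0 1
    ttable.1.1 1
    ttable.2.2 1
    ttable.3.3 1
}
```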
E.g. ttable4:1,0,2 will join sub-devices 2 and 3 of device 0 of card 1. You can use this device with JACK.
Converting Sample Rates On Input
This will take an input of any rate and convert it to 48000 Hz; change it to suit your needs.
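One way to express that (a sketch; in48k and hw:0,0 are placeholders) is a plug device whose hardware side is pinned to 48000 Hz, so applications can open it at any rate and the plugin resamples:

```
pcm.in48k {
    type plug
    slave {
        pcm "hw:0,0"
        rate 48000    # hardware side fixed at 48 kHz
    }
}
```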
Dupe output to multiple cards
In this example an intel8x0 (ICH6 @ hw:0) and an Aureon 5.1 USB card (Audio @ hw:1) are used. The default device is a stereo device; the audio stream is duped to both cards. Front left/right is copied to rear left/right, respectively, and center and sub-woofer are mixed 50%/50% from front left/right. Dmix is enabled on both cards.
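The original configuration is not preserved in this copy of the page. The sketch below follows the description above (dmix on each card, a multi plugin joining them, and a route table doing the copying and 50/50 mixing); all names and ipc_key values are placeholders.

```
pcm.dmix_onboard {
    type dmix
    ipc_key 2048
    slave {
        pcm "hw:0,0"
        channels 2
    }
}

pcm.dmix_usb {
    type dmix
    ipc_key 2049
    slave {
        pcm "hw:1,0"
        channels 6
    }
}

pcm.both_cards {
    type multi
    slaves.a.pcm "dmix_onboard"
    slaves.a.channels 2
    slaves.b.pcm "dmix_usb"
    slaves.b.channels 6
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
    bindings.4.slave b
    bindings.4.channel 2
    bindings.5.slave b
    bindings.5.channel 3
    bindings.6.slave b
    bindings.6.channel 4
    bindings.7.slave b
    bindings.7.channel 5
}

pcm.!default {
    type route
    slave {
        pcm "both_cards"
        channels 8
    }
    # onboard card: plain stereo copy
    ttable.0.0 1
    ttable.1.1 1
    # USB card front left/right
    ttable.0.2 1
    ttable.1.3 1
    # USB card rear copies front
    ttable.0.4 1
    ttable.1.5 1
    # USB card center and LFE: 50/50 mix of front left/right
    ttable.0.6 0.5
    ttable.1.6 0.5
    ttable.0.7 0.5
    ttable.1.7 0.5
}
```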
Downmix stereo to mono
From http://superuser.com/questions/155522/force-downmix-to-mono-on-linux/155601
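A typical recipe of this kind looks like the following (a sketch; mono_out and hw:0,0 are placeholders): both input channels are mixed at half volume into each output channel.

```
pcm.mono_out {
    type route
    slave.pcm "hw:0,0"
    ttable.0.0 0.5   # left in  -> left out
    ttable.1.0 0.5   # right in -> left out
    ttable.0.1 0.5   # left in  -> right out
    ttable.1.1 0.5   # right in -> right out
}
```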
Simple script to create an .asoundrc file
Other documentation of the .asoundrc file
For a detailed description of the syntax of the .asoundrc file, see below and also check out the asoundrc.txt file in the alsa-lib package. Joern Nettingsmeier posted an .asoundrc to the linux-audio-users mailing list to use two cards as one. If you are interested in a more advanced .asoundrc example, have a look at the RME Hammerfall .asoundrc file created by Jeremy Hall, or squisher's asoundrc (with one card as two, Skype upmixing, Ekiga hacks and wine testing).
Retrieved from 'http://alsa.opensrc.org/.asoundrc'
A recurring need at fuse* is a strong interaction between audio and video. The starting point is getting the audio signal into the generative software (usually written in OpenFrameworks).
For this reason we have studied several ways to route audio into the OF environment, available both for Mac OS and Windows 10.
First we created a basic OF example that reads multiple audio input channels.
MultiAudioInExample
MultiAudioInExample is an extension of audioInputExample (included in the OF workspace) that can show more than 2 input audio channels.
You can set the number of input channels and the buffer size by changing the const variables declared in ofApp.h, together with the other parameters of ofSoundStream.
After setup, ofSoundStream calls the audioIn method with an ofSoundBuffer;
alternatively, you can use the float buffer directly (currently commented out). In my experience it is more tedious, but it solved an annoying vector error I was getting on Windows 10 when using the audio jack.
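Putting the pieces together, here is a minimal sketch (not the original MultiAudioInExample source; the channel count, buffer size and member names are illustrative), using the OF 0.9.x ofSoundStream API mentioned later in this post:

```cpp
// ofApp.h
#pragma once
#include "ofMain.h"
#include <cmath>
#include <vector>

class ofApp : public ofBaseApp {
public:
    static const int N_INPUT_CHANNELS = 8;   // assumed value
    static const int BUFFER_SIZE      = 512; // assumed value

    void setup();
    void audioIn(ofSoundBuffer &input);      // ofSoundBuffer callback

    ofSoundStream soundStream;
    std::vector<float> levels;               // one RMS level per channel
};

// ofApp.cpp
void ofApp::setup(){
    levels.assign(N_INPUT_CHANNELS, 0.0f);
    // 0 output channels, N_INPUT_CHANNELS input channels,
    // 44.1 kHz sample rate, BUFFER_SIZE frames, 4 buffers
    soundStream.setup(this, 0, N_INPUT_CHANNELS, 44100, BUFFER_SIZE, 4);
}

void ofApp::audioIn(ofSoundBuffer &input){
    // compute a simple RMS level for every input channel
    for(std::size_t c = 0; c < input.getNumChannels(); c++){
        float sum = 0;
        for(std::size_t f = 0; f < input.getNumFrames(); f++){
            float s = input.getSample(f, c);
            sum += s * s;
        }
        levels[c] = std::sqrt(sum / input.getNumFrames());
    }
}
```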
# Audio rewire
Software compatibility is very frustrating. For this reason we grouped the software by compatibility: we usually work on MacBook Pros running Mac OS, while our clients use Windows machines. So, from the bottom of my heart, I apologise to Linux users (whom I admire every day), but I only took Mac OS and Windows into consideration.
Other combinations may be compatible, but in this post we will only consider the ones we have personally tested.
Loopback
Loopback is the best ready-to-go software for Mac OS, the natural successor to SoundFlower.
Pros | Cons |
---|---|
Easy to use and very easy configuration | It's not free; you need to buy a licence |
Continuous updates | It's not open source |
Autostart | Available only for Mac OS |
Unlimited virtual channels (tested with 12 channels) | Static audio routing (you cannot select individual applications) |
We use Loopback for our interactive show Dökk and we are very satisfied: you just need to select Loopback as the audio device and create the virtual channels.
SoundFlower
We often used SoundFlower in the past, but now we only use Loopback. So I can't say whether it still works on the latest Mac OS, but judging from the GitHub repository it has not been updated in several years.
## JACK Audio
JACK Audio is 'a professional sound server daemon that provides real-time, low-latency connections for both audio and MIDI data between applications that implement its API'.
Pros | Cons |
---|---|
Free | Not ready to go, but there is a useful FAQ for Windows |
Open source | It doesn't work with all audio software [1] [2] |
Unlimited virtual channels (tested with 12 channels) | |
Very flexible audio routing for each application | |
[1] JACK (on Windows) works with the PortAudio driver, so audio applications have to use that driver. In my experience I ran JACK together with ASIO4ALL; the sound was generated using Ableton Live 10.
[2] In Live 10 select Driver Type: ASIO4ALL, Audio Output Device: JackRouter.
JACK with OpenFrameworks
I successfully tested OF 0.9.8 on a Windows 10 machine using Visual Studio 2015. After starting the OF app, you can see the name of the application in the JACK Audio Connection Kit and connect the Output to the Input by dragging with the mouse. You can create persistent connections using the Patchbay.
Dante Via
After a lot of research I finally found an alternative to Loopback for Windows: Dante Via.
Pros | Cons |
---|---|
Easy to use and very easy configuration | It's not free; you need to buy a licence |
Available for Windows and Mac OS | It's not open source |
Very flexible audio routing for each application | Limited to 16x16 channels |
Possibility to route sound over the network [3] | |
[3] Very interesting, but I did not have the chance to test it.
An interesting line of research is sending audio sample buffers over the network. We tried OSC messages between Max/MSP and OpenFrameworks, but we ran into several problems. If you have suggestions or ideas, you are welcome!