What is /dev/dsp in Linux?

How do I know which is the default audio device, /dev/audio or /dev/dsp, in Ubuntu?

I was trying to play random songs from the command line using mpg123, but I did not know what my default audio device is, or whether I need to specify another audio device with the ‘-a’ option. Both /dev/audio and /dev/dsp are audio devices. How do I know which one is the default?

The default sound system is ALSA; /dev/audio and /dev/dsp (OSS interfaces, deprecated on Linux) are only an emulation layer over ALSA, and not even a fully functional one at that.

The default ALSA device is "default", and if you install mpg123-alsa, it should Do The Right Thing without requiring any options.
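For example, to list the ALSA device names and then play through the default one (this assumes an mpg123 build with the ALSA output module; the file name is a placeholder):

aplay -L                                  # list the ALSA PCM device names known to the system
mpg123 -o alsa -a default some_song.mp3   # play through the default ALSA device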

ALSA equivalent to /dev/audio dump?

This will be my poorest question ever.

On an old netbook, I installed an even older version of Debian, and toyed around a bit. One of the rather pleasing results was a very basic MP3 player (using libmpg123), integrated for adding background music to a little application doing something completely different. I grew rather fond of this little solution.

In that program, I dumped the decoded audio (from mpg123_decode()) to /dev/audio via a simple fwrite().

This worked fine — on the netbook.
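The pattern being described looks roughly like this. It is an illustrative sketch only, not the asker's actual code: the function name is made up, the buffer size is arbitrary, and libmpg123's feed-in/pull-out handling is simplified:

#include <mpg123.h>
#include <stdio.h>

/* Sketch: decode an in-memory MP3 buffer with libmpg123 and dump the
 * raw PCM to the OSS device /dev/audio with plain fwrite(). */
static void play_oss(mpg123_handle *mh, const unsigned char *mp3, size_t mp3len)
{
    FILE *audio = fopen("/dev/audio", "w");
    unsigned char pcm[8192];
    size_t done;
    int rc;

    if (!audio)
        return;

    do {
        /* feed encoded data in (first call only), pull decoded PCM out */
        rc = mpg123_decode(mh, mp3, mp3len, pcm, sizeof pcm, &done);
        fwrite(pcm, 1, done, audio);
        mp3 = NULL;    /* the input was consumed by the first call */
        mp3len = 0;
    } while (rc == MPG123_OK || rc == MPG123_NEW_FORMAT);

    fclose(audio);
}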

Now, I came to understand that /dev/audio was something provided by OSS, which is no longer supported on newer (ALSA) machines. Sure enough, my laptop (running a current Linux Mint) does not have this device.

So apparently I have to use ALSA instead. Searching the web, I’ve found a couple of tutorials, and they pretty much blow my mind. Modes, parameters, capabilities, access type, sample format, sample rate, number of channels, number of periods, period size. I understand that ALSA is a powerful API for the ambitious, but that’s not what I am looking for (or have the time to grok). All I am looking for is how to play the output of mpg123_decode (the format of which I don’t even know, not being an audio geek by a long shot).

Can anybody give me some hints on what needs to be done?

How do I get ALSA to play raw audio data?

There's an OSS compatibility layer for ALSA in the alsa-oss package. Install it and run your program inside the "aoss" wrapper. Or, load ALSA's OSS emulation modules with modprobe, as shown in the example below.

Then, you'll need to change your program to use "/dev/dsp" or "/dev/dsp0" instead of "/dev/audio". It should work the way you remember it, but you might want to cross your fingers, just in case.
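For example (the module names are those of ALSA's in-kernel OSS emulation; ./myplayer stands in for your own binary):

# kernel route: load the OSS emulation modules, which create /dev/dsp and friends
modprobe snd-pcm-oss
modprobe snd-mixer-oss

# userspace route: run the unmodified program under the aoss wrapper
aoss ./myplayer some_song.mp3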

You could install sox and open a pipe to the play command, with the correct sample rate and sample size arguments.
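A minimal sketch of that approach, assuming the decoder produces 44.1 kHz, signed 16-bit, stereo PCM (adjust the play arguments to match your actual format):

#include <stdio.h>

int main(void)
{
    /* open a pipe to SoX's `play`, telling it how to interpret the raw PCM on stdin */
    FILE *out = popen("play -q -t raw -r 44100 -e signed -b 16 -c 2 -", "w");
    short block[4096] = { 0 };        /* one block of samples (silence here) */

    if (!out)
        return 1;

    /* ... in a real player, fill block from mpg123_decode()/mpg123_read() ... */
    fwrite(block, sizeof block[0], sizeof block / sizeof block[0], out);

    pclose(out);                      /* closes the pipe and waits for `play` */
    return 0;
}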

Using ALSA directly is overly complicated, so I hope a GStreamer solution is fine for you too. GStreamer gives a nice abstraction over ALSA/OSS/PulseAudio/you name it, and is ubiquitous in the Linux world.

I wrote a little library that opens a FILE object which you can fwrite() PCM data into: gstreamer_file. The actual code is less than 100 lines.

FILE *output = fopen_gst(rate, channels, bit_depth);  // open audio output file
while (have_more_data)
    fwrite(data, amount, 1, output);                  // output audio data
fclose(output);                                       // close the output file

I added an mpg123 example, too.

Here is the whole file (in case GitHub goes out of business 😉):

/**
 * gstreamer_file.c
 * Copyright 2012 René Kijewski
 * License: LGPL 3.0 (http://www.gnu.org/licenses/lgpl-3.0)
 */

#include "gstreamer_file.h"

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include <gst/gst.h>

#ifndef _GNU_SOURCE
# error "You need to add -D_GNU_SOURCE to the GCC parameters!"
#endif

/**
 * Cookie passed to the callbacks.
 */
typedef struct {
    /** {file descriptor to read from, fd to write to} */
    int pipefd[2];
    /** GStreamer pipeline */
    GstElement *pipeline;
} cookie_t;

static ssize_t write_gst(void *cookie_, const char *buf, size_t size)
{
    cookie_t *cookie = cookie_;
    return write(cookie->pipefd[1], buf, size);
}

static int close_gst(void *cookie_)
{
    cookie_t *cookie = cookie_;
    gst_element_set_state(cookie->pipeline, GST_STATE_NULL); /* we are finished */
    gst_object_unref(GST_OBJECT(cookie->pipeline)); /* we won't access the pipeline anymore */
    close(cookie->pipefd[0]); /* the pipeline won't read anymore */
    close(cookie->pipefd[1]); /* we won't write anymore */
    free(cookie); /* dispose of the cookie */
    return 0;
}

FILE *fopen_gst(long rate, int channels, int depth)
{
    /* initialize GStreamer */
    if (!gst_is_initialized()) {
        GError *error;
        if (!gst_init_check(NULL, NULL, &error)) {
            g_error_free(error);
            return NULL;
        }
    }

    /* get a cookie */
    cookie_t *cookie = malloc(sizeof(*cookie));
    if (!cookie) {
        return NULL;
    }

    /* open a pipe to be used between the caller and the GStreamer pipeline */
    if (pipe(cookie->pipefd) != 0) {
        free(cookie);
        return NULL;
    }

    /* set up the pipeline */
    char description[256];
    snprintf(description, sizeof(description),
             "fdsrc fd=%d ! "                            /* read from a file descriptor */
             "audio/x-raw-int, rate=%ld, channels=%d, "  /* treat it as PCM data */
             "endianness=1234, width=%d, depth=%d, signed=true ! "
             "audioconvert ! audioresample ! "           /* convert/resample if needed */
             "autoaudiosink",                            /* output to speakers (using ALSA, OSS, PulseAudio, ...) */
             cookie->pipefd[0], rate, channels, depth, depth);
    cookie->pipeline = gst_parse_launch_full(description, NULL,
                                             GST_PARSE_FLAG_FATAL_ERRORS, NULL);
    if (!cookie->pipeline) {
        close(cookie->pipefd[0]);
        close(cookie->pipefd[1]);
        free(cookie);
        return NULL;
    }

    /* open a FILE with specialized write and close functions */
    cookie_io_functions_t io_funcs = { NULL, write_gst, NULL, close_gst };
    FILE *result = fopencookie(cookie, "w", io_funcs);
    if (!result) {
        close_gst(cookie);
        return NULL;
    }

    /* start the pipeline (of course it will wait for some data first) */
    gst_element_set_state(cookie->pipeline, GST_STATE_PLAYING);
    return result;
}
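A plausible compile line for the file above; the audio/x-raw-int caps imply GStreamer 0.10, so the pkg-config package name here is an assumption that may need adjusting for your distribution:

gcc -D_GNU_SOURCE $(pkg-config --cflags gstreamer-0.10) -c gstreamer_file.c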

Combining the comment of Artefact2 (using aplay for output) and the answer of kay (using pipe(), which I had not touched before), I came up with this "minimal" example. For the ALSA version, it creates a pipe, forks an aplay process with the appropriate parameters, and feeds the decoded audio to it.

It uses lots of assert() calls to show the error codes associated with the individual function calls. The next step, of course, would not be adding -DNDEBUG (which would make this program really quick and really useless), but replacing the asserts with appropriate error handling, including human-readable error messages.

// A small example program showing how to decode an MP3 file.

#include <assert.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#include <mpg123.h>

int main( int argc, char * argv[] )
{
    // buffer and counter for decoded audio data
    size_t OUTSIZE = 65536;
    unsigned char outmem[OUTSIZE];
    size_t outbytes;
    // output file descriptor
    int outfile;
    // handle, return code for mpg123
    mpg123_handle * handle;
    int rc = MPG123_OK;

    // one command line parameter, being the MP3 filename
    assert( argc == 2 );

#ifdef OSS
    assert( ( outfile = open( "/dev/audio", O_WRONLY ) ) != -1 );
#else // ALSA
    // pipe file descriptors
    int piped[2];
    assert( pipe( piped ) != -1 );
    // fork into decoder (parent) and player (child)
    if ( fork() == 0 )
    {
        // player (child)
        assert( dup2( piped[0], 0 ) != -1 );  // make pipe-in the new stdin
        assert( close( piped[1] ) == 0 );     // pipe-out, not needed
        assert( execlp( "aplay", "aplay", "-q", "-r44100", "-c2", "-fS16_LE", "-traw", NULL ) != -1 );  // should not return
    }
    else
    {
        // decoder (parent)
        close( piped[0] );   // pipe-in, not needed
        outfile = piped[1];  // decoded data goes into the pipe
    }
#endif

    // initializing
    assert( mpg123_init() == MPG123_OK );
    assert( atexit( mpg123_exit ) == 0 );

    // setting up the handle
    assert( ( handle = mpg123_new( NULL, NULL ) ) != NULL );

    // clearing the format list, and setting the one preferred format
    assert( mpg123_format_none( handle ) == MPG123_OK );
    assert( mpg123_format( handle, 44100, MPG123_STEREO, MPG123_ENC_SIGNED_16 ) == MPG123_OK );

    // open the input MP3 file
    assert( mpg123_open( handle, argv[1] ) == MPG123_OK );

    // loop over the input
    while ( rc != MPG123_DONE )
    {
        rc = mpg123_read( handle, outmem, OUTSIZE, &outbytes );
        assert( rc != MPG123_ERR && rc != MPG123_NEED_MORE );
        assert( write( outfile, outmem, outbytes ) != -1 );
    }

    // cleanup
    assert( close( outfile ) == 0 );
    mpg123_delete( handle );
    return EXIT_SUCCESS;
}
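Something like the following should build it, assuming the file is saved as miniplayer.c and the stock libmpg123 development package is installed:

gcc -o miniplayer miniplayer.c -lmpg123          # ALSA (aplay) version
gcc -DOSS -o miniplayer miniplayer.c -lmpg123    # OSS /dev/audio version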

I hope this helps others with similar problems, as a template.

How to get Docker audio and input with a Windows or Mac host?

I’m trying to create a Docker image that works with a speaker and microphone.

I’ve got it working with Ubuntu as host using:

docker run -it --device /dev/snd:/dev/snd

I'd also like to be able to use the Docker image on Windows and Mac hosts, but I can't find the equivalent of /dev/snd to make use of the host's speaker/microphone.

I was able to get playback on Windows using pulseaudio.exe.

1) Download PulseAudio for Windows: https://www.freedesktop.org/wiki/Software/PulseAudio/Ports/Windows/Support/

2) Uncompress it and change the config files.

2a) Add the following line to your $INSTALL_DIR/etc/pulse/default.pa:

load-module module-native-protocol-tcp listen=0.0.0.0 auth-anonymous=1 

This is an insecure setting: while the process is running, anyone on your network will be able to push sound to this port. There are IP-based authentication options that are more secure, but there's some Docker sorcery involved in leveraging them, I think. The risk will be acceptable for most users.

2b) Change the exit-idle-time line in $INSTALL_DIR/etc/pulse/daemon.conf to read: exit-idle-time = -1

This will keep the daemon open after the last client disconnects.

3) Run pulseaudio.exe. You can run it in the background, but then it's trickier to kill than a simple foreground execution.

4) In the container’s shell:

export PULSE_SERVER=tcp:127.0.0.1 

One of the articles I sourced this from (https://token2shell.com/howto/x410/enabling-sound-in-wsl-ubuntu-let-it-sing/) suggests recording may be blocked in Windows 10.
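The variable can also be set when the container is started. Here host.docker.internal (Docker Desktop's name for the host), port 4713 (PulseAudio's default native-protocol TCP port) and the image name are all assumptions to adapt to your setup:

docker run -it -e PULSE_SERVER=tcp:host.docker.internal:4713 myimage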


What is /dev/dsp in Linux?

NAME

dsp - Open Sound System audio devices

DESCRIPTION

/dev/dsp is the default audio device in the system. It's connected to the main speakers and the primary recording source (such as a microphone). The system administrator can set /dev/dsp to be a symbolic link to the desired default device. The ossinfo utility can be used to list the available audio devices in the system. /dev/dsp_mmap, /dev/dsp_ac3, /dev/dsp_multich and /dev/dsp_spdifout are default audio devices for specific applications such as games or media (DVD) players.

DIRECT ACCESS AUDIO DEVICE FILES

OSS 4.0 (and later) creates audio devices under the /dev/oss/ directory. For example, /dev/oss/sblive0/pcm0 is the first audio device belonging to the first Sound Blaster Live! or Audigy card in the system. These direct devices are used when an application needs to access a specific audio device (instead of the default one). You can use the ossinfo(1) utility with the -a option to get a list of the available audio devices in the system.

LEGACY AUDIO DEVICE FILES

Traditionally, OSS has created device files like /dev/dsp0 to /dev/dspN for each audio device in the system. OSS 4.0 still supports this legacy naming. These files are symbolic links to the actual device files located under /dev/oss. The ossdevlinks(8) utility is used to manage these links; it is invoked automatically when OSS is started.

COMPATIBILITY

• The /dev/dsp (default) audio device file will be supported by all OSS implementations and versions.
• The special-purpose default audio devices (such as /dev/dsp_mmap) are only supported by OSS 4.0 and later.
• The legacy audio device files (such as /dev/dsp1) are supported by all OSS versions and implementations.
• New-style audio device files (under /dev/oss) are only supported by OSS 4.0 and later. However, some independent OSS implementations may support only the legacy naming, even if they are otherwise OSS 4.0 compatible.
• /dev/dsp0 doesn't exist in all Linux systems; some use /dev/dsp for the same purpose. In such systems, /dev/dsp points to the first audio device and /dev/dsp1 to the second.

PROGRAMMING INFORMATION
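
A minimal sketch of classic OSS playback against /dev/dsp, using the standard sys/soundcard.h ioctls; error handling on the ioctls is omitted, and the one-second silence buffer merely stands in for real PCM data:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/dsp", O_WRONLY);          /* the default audio device */
    int fmt = AFMT_S16_LE, channels = 2, rate = 44100;

    if (fd == -1)
        return 1;

    ioctl(fd, SNDCTL_DSP_SETFMT, &fmt);           /* 16-bit signed little-endian */
    ioctl(fd, SNDCTL_DSP_CHANNELS, &channels);    /* stereo */
    ioctl(fd, SNDCTL_DSP_SPEED, &rate);           /* 44100 Hz */

    /* one second of silence: rate * channels * 2 bytes per sample */
    static unsigned char pcm[44100 * 2 * 2];
    write(fd, pcm, sizeof pcm);

    close(fd);
    return 0;
}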

OPTIONS

FILES

o /dev/dsp                 Default audio device
o /dev/dsp_mmap            Default audio device for applications using mmap(2)
o /dev/dsp_ac3             Default audio device for applications sending Dolby Digital (AC3) audio to an external receiver
o /dev/dsp_multich         Default multichannel (4.0-7.1) audio output device
o /dev/dsp_spdifout        Default digital audio (S/PDIF) output device
o /dev/oss/<device>/pcmN   Direct access device files for individual audio devices
o /dev/dsp0 to /dev/dspN   Legacy-style direct access audio device files

AUTHOR

