
CALIFORNIA STATE UNIVERSITY, NORTHRIDGE

FPGA Implementation of 3D Spatial Audio

A graduate project submitted in partial fulfillment of the requirements

For the degree of Master of Science in Computer Engineering

By

Madhumitra Santhanakrishnan

August 2020


The graduate project of Madhumitra Santhanakrishnan is approved:

_____________________________________________ ________

Dr. Xiaojun (Ashley) Geng, PhD, Date

_____________________________________________ ________

Prof. Benjamin Mallard, M.S.E.E., Date

_____________________________________________ ________

Dr. Shahnam Mirzaei, PhD, Chair Date

California State University, Northridge


ACKNOWLEDGEMENTS

It is a great pleasure to have come this far in my academic journey. The Department of Electrical and Computer Engineering at California State University, Northridge has offered me endless learning in a challenging and ever-advancing field. I would like to thank every professor and faculty member who has helped me build an environment in which I could thrive, and who helped me lay a strong foundation for my career as a Computer Engineer. First and foremost, I would like to express my sincere appreciation and gratitude to Dr. Shahnam Mirzaei, the Committee Chair. The completion of this study would not have been possible without his knowledge and advice. It was through Professor Mirzaei's guidance that I stepped into, and gained immense interest in, FPGA and System on Chip technology. I am inspired by the attitude and knowledge he brings to projects and research in Computer Engineering.

I would further like to thank Professor Benjamin Mallard for helping me reach this position. Working with Professor Mallard taught me a work ethic and a challenge-embracing attitude that I will carry forward on this journey. His knowledge of Electrical and Electronics Engineering is immense and inspires me to dive further into the technical aspects of my field. I would also like to thank Dr. Xiaojun (Ashley) Geng for helping me pursue this opportunity. Her guidance at the beginning of my Masters program led me to the subjects that led to the completion of this project. Lastly, I would like to thank my family and close friends for encouraging me during the most challenging moments of this journey.


TABLE OF CONTENTS

ACKNOWLEDGEMENTS

LIST OF FIGURES

ABSTRACT

CHAPTER 1: INTRODUCTION

CHAPTER 2: LITERATURE REVIEW

CHAPTER 3: FPGA IMPLEMENTATION

  OBJECTIVE

  SUMMARY

  BLOCK DESIGN

  COMPONENT EXPLANATION

  CODE REVIEW

CHAPTER 4: RESULT

  TESTING PROCEDURE

CHAPTER 5: CONCLUSION

WORK CITED

APPENDIX


LIST OF FIGURES

Figure 1 - Azimuth and Elevation

Figure 2 - General Data Flow

Figure 3 - Block Design

Figure 4 - AXI DMA - Direct Memory Access

Figure 5 - AXI DMA

Figure 6 - AXI I2S audio interface

Figure 7 - I2S audio interface

Figure 8 - FIR filter

Figure 9 - Frequency Response of Filter

Figure 10 - Main functional code

Figure 11 - Result


ABSTRACT

FPGA Implementation of 3D Spatial Audio

By

Madhumitra Santhanakrishnan

Master of Science in Computer Engineering

The following contains the procedure and methodology for creating 3D spatial audio using the Field Programmable Gate Array (FPGA) fabric available on the Zybo Z7 System on Chip board. Audio spatialization is the digital signal processing (DSP) of sound to give the listener the impression that the sound source sits within a three-dimensional environment. An FPGA is well suited to this task because of its efficient DSP slices and extensive configurability. The first step is to understand how audio spatialization can be modelled. The next step is to set up the Zybo board's audio codec Inter-IC Sound (I2S) interface to stream audio through the board's input and output. Finally, the project designs a digital filter that creates a specific localized effect on the output audio. This report goes through these steps in detail.

CHAPTER 1: INTRODUCTION

Spatial Audio is used in various applications to enhance the quality of the sound produced. Examples include concert halls, movie theaters, virtual reality simulations, digital sound-producing devices (e.g. headphones and speakers), hearing aids, and many more. It can be seen in the movie, music, virtual reality, and augmented reality industries. Spatial Audio can be defined as audio processing technology that reproduces the spaciousness of sound, either in a live concert acoustical format or through digital signal processing. In this project, the focus is on producing Spatial Audio through digital signal processing techniques. There are three main parameters in Spatial Audio: the listener, the source of sound, and the spatial points representing where the sound should appear to be coming from. The spatialized sound in this project is delivered through regular earphones connected to the audio codec portion of the Zybo board.

FPGA technology is an efficient way to implement this design. Two of the FPGA's biggest features are re-programmability and support for heavy digital signal processing. Re-programmability is useful in our situation since we will be testing specific spatial audio sounds repeatedly. As mentioned, an FPGA has DSP slices for implementing signal processing functions. Each DSP slice provides Multiply-Accumulate (MAC) operations, which can produce the digital signal processing operations needed to convert and transform the incoming audio source. This project uses the FPGA on the System on Chip Zybo Zynq-7020 board, programmed through Vivado version 2018.2. It also includes embedded software, written in the Xilinx Software Development Kit (SDK). An existing MATLAB model was initially used to test how a spatialized sound may appear, but the focus of this report is on the FPGA model.

The overall process flow is to convert the incoming sound into the digital domain, process it through a specific digital filter (explained further below), and convert it back into analog format to listen to through the headphones.



CHAPTER 2: LITERATURE REVIEW

The goal of this literature survey is to help design a system, using digital signal processing, that produces the desired sound spatialization effect. As previously mentioned, audio spatialization is the process of creating a sound or audio output that appears to the listener to be localized at a point in three-dimensional space.

A study done by Ville Pulkki as part of the Laboratory of Acoustics and Audio Signal Processing claims, “The acoustical sound field around us is very complex. Direct sounds, reflections, and refractions arrive at the listener’s ears, who then analyzes incoming sounds and connects them mentally to sound sources,” (Pulkki, 1997, p. 456). Pulkki explains and mathematically derives possible methods by which this effect can be created through various loudspeaker systems. The final method the author derives is called Vector Base Amplitude Panning (VBAP). While this method is useful for a system involving multiple speakers, it may not be suitable or efficient for this project, which uses headphones. However, the article provides an articulate derivation and a range of possible starting points for producing the spatial audio effect. Pulkki’s main mathematical model involves changing the amplitude and the delay, or time difference, between different sound sources; the author claims this is the key to producing the desired result.

Before diving further into this model, the spatial or localization aspect must be quantified. To quantify the localization of the audio we can treat the head as a center point and use arbitrary vertical and horizontal planes to measure different placements. These placements are represented as an azimuth and an elevation (both in degrees). Azimuth is the lateral change in position, where the center of the face is 0 degrees. Elevation is the vertical change in position, where the center of the face is again 0 degrees. Pulkki considers radius as another aspect; however, since headphones are used here, radius (distance from the listener’s head) need not be considered for this project. Fig. 1 below gives a visual representation of the spatial points. The diagram comes from a Sound Spatialization study completed at Cornell University.

Figure 1 - Azimuth and Elevation

Coming back to Pulkki’s concept of varying amplitude and delay, the author introduces the concepts of ITD, the Interaural Time Difference model, and IID, the Interaural Intensity Difference. Another paper, written by Corey I. Cheng and Gregory H. Wakefield, discusses this topic in depth and later introduces a technology we can use. Diving into this paper helps formulate the model.

One of the most effective spatial models is the ITD and IID model. Cheng and Wakefield call this model the Duplex theory and claim these are “important parameters for the perception of a sound’s location in the [horizontal plane]” (Cheng and Wakefield, p. 1). The ITD model creates a lateral sound displacement by playing the sound to one ear at a slightly delayed time with respect to the other ear. As long as the delay does not reach a threshold value that causes an obvious lag in the sound, this method can create the effect of a sound localized to a certain point. Suppose, for example, that we want the sound to appear to come from an azimuth of 135 degrees. In this case the sound reaches the left ear first and the right ear after a slight delay, allowing the listener to feel the sound is coming roughly from 135 degrees azimuth. ITD can therefore be used as a principle for lateral-displacement sound effects. The publication mentions, however, that for sound frequencies above 1500 Hz, ITDs can suffer aliasing errors because the time difference can be greater than one period. This is where IID, the Interaural Intensity Difference, comes in: IID adjusts the difference in amplitude between the two ears to correct the spatial localization. ITD and IID only correspond to changes in the perceived azimuth, or horizontal axis. It is the Pinna model that is used to find the correct audio spatialization with respect to elevation. Without going into much detail, a Pinna model is based on characteristics formed from spherical-head, shoulder-reflection, torso-diffraction, and ear-canal-entrance models. It is a model that includes components such as reflection, lag, and resonance.

It must be kept in mind that spatial audio is a phenomenon relative to each listener’s ears. As an overall model, we can imagine spatial audio processing as a system where the sound source is the input, it is passed through specific processing, and the output is the spatialized audio. The filter designed therefore has to account for the variety of ways a listener’s ears can perceive sound. This perception can be affected by the listener’s torso, neck, and head placement.

There are various methods to create a spatial audio sound that models the concepts discussed, but this project focuses on one specifically: the use of Head Related Transfer Functions (HRTF). A head related transfer function characterizes how an ear receives sound from a certain point in space. Cheng and Wakefield claim, “[HRTF] is defined to be a specific individual’s left or right ear far-field frequency response, as measured from a specific point in the free field to a specific point in the ear canal,” (Cheng and Wakefield, p. 2).

To use this HRTF data we need to design a Finite Impulse Response (FIR) filter. The coefficients of the FIR filter will be the Head Related Impulse Responses (HRIR). HRIRs are measured experimentally by using a *KEMAR head, moving physical sound sources to known azimuths and elevations, and then mathematically processing the results. The mathematical model is complex, but this project uses its results directly. HRTF values do vary from person to person. Fortunately, the University of California (UC) Davis offers HRTF measurements for over 90 subjects, and this is where the data for this project is extracted.

Fortunately for us, the UC Davis HRTF study is based on the ITD/IID model and the Pinna model. The following MATLAB code accesses the database. For this project, I have selected an azimuth of 10 degrees and an elevation of 10 degrees. The database can be downloaded from the UC Davis Electrical and Computer Engineering department site. These values correspond to 200 coefficients that will later be normalized as part of the COE file included in the appendix.

% Pull one subject's left/right HRIRs out of the CIPIC database
cd standard_hrir_database;
cd subject_003;
load('hrir_final.mat');
hl = squeeze(hrir_l(10,10,:));  % left-ear HRIR at the chosen az/el index
hr = squeeze(hrir_r(10,10,:));  % right-ear HRIR at the chosen az/el index

*KEMAR (Knowles Electronics Manikin for Acoustic Research) is a head-and-torso simulator with acoustical characteristics similar to those of an average human. It is used for acoustical measurements.


CHAPTER 3 : FPGA IMPLEMENTATION

OBJECTIVE

1. To design a system that takes an audio input and creates an output localized to a specific azimuth and elevation; in this case, 10 degrees in both. The sub-objectives for creating this model are:

a. Configure the audio codec to allow streaming the audio data in and out.

b. Add the FIR filter with the HRIR coefficients to create the desired output.

SUMMARY

The board used is the Zybo Z7020 SoC board. It includes an audio codec that is accessed through an I2S connection. The challenge in this project is making the audio interface work on the Zybo board before adding the FIR filter. Digilent provides a DMA Audio example on which this design is based. The audio input is connected to MIC IN. It is recorded into memory, after which it is read back from memory and sent out through the I2S connection again. The output should be a spatialized audio signal. The next section examines this flow, allowing us to connect the filter in the correct place. A full image of the Zybo board with labelled parts is included in the appendix.

• Audio input - connected from the laptop to MIC IN on the board through a 3.5 mm auxiliary audio cable.

• Audio output - connected from HPH OUT through regular 3.5 mm headphones.

BUTTON ACTION

BTN0 No effect

BTN1 Record from mic in

BTN2 Play on hph out

BTN3 Record from line in

When BTN1 is pressed the system records the audio from the input device for 5 seconds and writes it into memory. Upon the press of the button, an Interrupt Service Routine (ISR) is initiated to start the recording. Once the audio is in memory we can read it back; it is here that the data stream runs through the FIR filter with the HRIR coefficients. The output is then delivered to HPH OUT. Please refer to Fig. 2 below.


Figure 2 - General Data Flow

BLOCK DESIGN

Please refer to Fig. 3 or the appendix for a clearer view of the block design.

Figure 3 - Block Design


COMPONENT EXPLANATION

The main components are the processing system, the DMA (Direct Memory Access), the I2S audio interface, the FIR filter, the GPIO for the buttons, and a memory interconnect. System audio I/O is handled by the audio interface IP. This interface sends in the audio data, which is filtered through the HRTF FIR filter and then recorded into memory through the DMA. When playback is required, the DMA sends the data back out through the interface, which presents the output. The GPIO component establishes the interface for the buttons. Many of these components are AXI (Advanced eXtensible Interface) compliant.

After the block diagram is designed, we can synthesize the design and import it into the Xilinx Software Development Kit, where it is programmed in a higher-level language, C. The following components are part of the Xilinx IP library, and all of them are interconnected through the Zynq processing system. The following explains the use of each block in the overall design.

AXI IIC

This component is mainly used as a transaction bridge from the AXI4-Lite interface. According to the Xilinx documentation, it is helpful for interfacing with IIC-compliant devices, and it is used here to help handle the interrupts. Its driver contains I2C initialization functions and asynchronous reads from the EEPROM (electrically erasable programmable read-only memory) on the board.

AXI GPIO (AXI GENERAL PURPOSE INPUT OUTPUT)

This component connects to the 4 buttons that allow us to control the audio signal. It is later programmed to check for an interrupt once a button is pressed. The main function that runs the demo controls part of this GPIO: it initializes the interrupt controller, the IIC controller, the User I/O driver, the DMA engine, and the Audio controller upon the click of the respective button, as explained in the Code Review section. In using this component from the code, we first enable the GPIO channel interrupts so that push-button presses can be detected; we then check whether an interrupt is of interest and, if so, run the interrupt routine.

AXI MEM INTERCONNECT

The AXI Memory Interconnect connects AXI memory-mapped master devices to one or more memory-mapped slave devices. Here it connects the main Zynq PS (processing system) to the AXI DMA component.


AXI DMA

Figure 4 - AXI DMA - Direct Memory Axi

The AXI DMA, or AXI Direct Memory Access, component is key to recording the incoming audio samples. It provides high-bandwidth direct memory access between memory and AXI4-Stream target peripherals. Initially, the possibility of connecting it directly to the FIR filter was explored; as seen later, that method is not the most efficient. The block diagram below helps clarify the connections to this component.


Figure 5 - AXI DMA

The AXI4-Stream master (MM2S, memory-mapped to stream) channel is the data path between system memory and the streaming target, which in our case is the I2S audio component. The audio coming from the I2S component must go through the FIR filter before reaching the AXI DMA; the output of the FIR filter is therefore connected to the DMA's slave port. Data transfer also occurs on the S2MM (stream to memory-mapped) channel, to which the audio master is connected for recording the input audio.


I2S AUDIO INTERFACE

Figure 6 - AXI I2S audio interface

The I2S audio interface allows us to communicate with the audio codec on the FPGA board. The codec used on the board is an Analog Devices SSM2603. Understanding its configuration helps us understand how to change the audio output. The digital side of the SSM2603 is connected to the programmable-logic side of the Zynq board, and the audio transactions use the I2S protocol, which is already part of Xilinx’s IP. The following table shows the protocol connections.

Figure 7 - I2S audio interface

These connections are handled by the I2S audio IP component. A few more connections remain, which are used to send in the audio that needs to be played and the audio that is to be recorded. In the diagram, the AXI_S2MM port is connected to the FIR filter and then to the DMA IP for sending in the data to be recorded. SDATA_I is the connection to the external input for collecting the audio to be recorded. AXI_MM2S, as discussed, comes from the DMA for playback purposes. AXI_S2MM connects to the FIR filter, whose output connects to the DMA's AXI_S2MM.


FIR FILTER

Figure 8 - FIR filter

Figure 9 - Frequency Response of Filter

Earlier, the HRIR coefficients were extracted from the UC Davis database. The data was normalized, converted into a HEX format, and placed in a COE file. Using Xilinx’s FIR Compiler IP, we can use this COE file directly to create an FIR filter that fits our need. The input and output data widths were adjusted to meet the streaming bit size: the output from the DMA is 32 bits in length, and to accommodate this, the coefficients were simply put into a 32-bit format.

The input of the FIR Compiler is connected directly to the I2S interface module, so the data is processed on its way into the DMA component. Our FIR filter comprises 200 coefficients, which come from the HRTF database. It is preferable to use the Xilinx IP component instead of hand-coding the filter in VHDL or Verilog, as this creates a more reliable design with fewer errors. In using the Xilinx IP for the FIR filter, it is important that the data-bus length match the length of the incoming data from the I2S module’s AXI_S2MM port. The data-bus length is 4 bytes in our case; this can be changed through the FIR filter configuration.
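Functionally, the filter computes the standard FIR convolution y[n] = Σₖ h[k]·x[n−k] over its 200 taps, one multiply-accumulate per tap. A plain-C reference model (a software sketch for checking behaviour, not the DSP-slice implementation) would be:

```c
/* Reference model of an N-tap FIR filter: y[n] = sum_k h[k] * x[n-k].
 * The FIR Compiler IP performs the same arithmetic in DSP slices;
 * this software version exists only to illustrate/verify behaviour. */
void fir_filter(const double *h, int ntaps,
                const double *x, double *y, int nsamples)
{
    for (int n = 0; n < nsamples; n++) {
        double acc = 0.0;
        for (int k = 0; k < ntaps && k <= n; k++)
            acc += h[k] * x[n - k]; /* one multiply-accumulate per tap */
        y[n] = acc;
    }
}
```

Feeding a unit impulse through such a filter reproduces the coefficient sequence itself, which is a handy sanity check when the taps are HRIR samples.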


CODE REVIEW

The main areas that require embedded software are interrupt handling, audio interfacing, the audio demo, DMA access, and user input/output configuration. This programming is done through the Xilinx Software Development Kit (SDK). The SDK also offers a terminal connection to the board, allowing us to use print statements to view the results. The flow chart below explains the entire flow of the system.

PROCESS FLOW DIAGRAM

The following operations are programmed into the system. The final DEMO function is what controls the

rest of the components allowing the complete system to function.


DEMO FUNCTION

Figure 10 - Main functional code

1. Initialize the interrupt controller, IIC controller, User I/O driver, DMA, and Audio I2S.

2. Print the menu options (Record from MIC IN, Play on HPH OUT).

3. Check if a button has been pressed.

a. If the button pressed is BTN1 - Record from MIC IN

b. Set up the MIC input

c. Start recording

4. During recording, the DMA S2MM flag is enabled. An interrupt enables the recording operation to save the audio into memory.

5. Check if the DMA S2MM flag has been reset.

a. If it has been reset, we can proceed onwards.

b. Else, if any other button is pressed during this routine, the terminal output will print “Still Recording…”

6. The system will now print that the recording function is done.

7. Here the user may click the playback button.

8. If the playback button is clicked, a function to set up the HPH output and a function for audio playback are enabled.

9. Audio playback initiates the interrupt routine to stream data through the MM2S pathway.

10. The DMA MM2S flag will reset once the playback is complete. A confirmation message will be printed to the screen.

The FIR filter is declared as a component as part of the initial Block design. It need not be programmed

through the Software Development Kit.
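The numbered steps above amount to a small polled state machine. The condensed plain-C model below is illustrative only; the state and flag names are my own, and the real demo instead calls the Xilinx driver functions and ISRs described above:

```c
#include <stdbool.h>

/* Condensed model of the demo's control flow (BTN3/line-in omitted).
 * States and names are illustrative, not the actual demo identifiers. */
typedef enum { IDLE, RECORDING, READY, PLAYING } demo_state_t;

demo_state_t demo_step(demo_state_t s, int button, bool dma_flag_clear)
{
    switch (s) {
    case IDLE:      /* menu printed; BTN1 starts a recording        */
        return (button == 1) ? RECORDING : IDLE;
    case RECORDING: /* S2MM transfer running; wait for its flag     */
        return dma_flag_clear ? READY : RECORDING;
    case READY:     /* recording done; BTN2 starts playback         */
        return (button == 2) ? PLAYING : READY;
    case PLAYING:   /* MM2S streams memory -> I2S until flag resets */
        return dma_flag_clear ? READY : PLAYING;
    }
    return IDLE;
}
```

Each call to demo_step corresponds to one pass of the polling loop: button presses move the machine forward, and the DMA flags gate the transitions out of the recording and playback states.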


CHAPTER 4: RESULT

The following shows the result of the DMA Audio system.

TESTING PROCEDURE

1. A simple 80 BPM drum track is played from the laptop and fed into MIC IN through an AUX cable.

2. BTN1 is pressed to record the audio track. The terminal output will print “Start Recording”. If any other button is pressed during this stage, the terminal will print “Still Recording”.

3. Once 5 seconds have been recorded, the terminal output will print that the recording is done.

4. BTN2 is then pressed to play back the track. The audio can be heard playing back through the headphones on HPH OUT.

5. The system will confirm the playback is done.

Figure 11 - Result

CHAPTER 5: CONCLUSION


This project implements 3D audio spatialization through the FPGA on the Zybo 7020 SoC board. The use of HRTF filters may be the most efficient technique for producing simple 3D spatialization with limited audio output resources. Increasing the number of sound sources increases the number of parameters needed; for example, when using multiple loudspeakers as the sound source, the positions of the speakers and the values of ITD and IID now vary. However, HRTF will still remain the basis for these further implementations and calculations. To conclude, the HRTF filter model can lead to more complex models for different types of sound sources and usages.

Completing this project involved much trial and error, and the design will also vary depending on the board. The proposed FIR filter design is a viable way to create a form of audio spatialization, though it is not a good design for a system involving more than 2 sound sources. There is quite a lot of debugging involved in the process, from the hardware design to the high-level C programming.

There is a plethora of potential future work and research in Spatial Audio. It comes with various challenges, one of the biggest being the inability to perform a guaranteed test of whether an audio sound is being perceived at the desired location by the listener. This parameter varies from person to person; only further trial and error can relieve this issue.

This project offers a starting point for using an integrated circuit to create an audio interface and perform digital signal processing on the audio itself. It is valuable to make heavy processing as efficient as possible within a constrained system, and the project therefore shows why the FPGA can be a technology of great value for DSP of audio data. While using a CPU may make this model easier to implement, it loses out on qualities the FPGA offers, such as efficiency, cost, and configurability. Overall, this method offers great scope for designing more configurable audio signal processing systems in the future.


WORK CITED

Pulkki, Ville. “Virtual Sound Source Positioning Using Vector Base Amplitude Panning.” Journal of the Audio Engineering Society, vol. 45, no. 6, 1997, pp. 456–466.

Barreto, Armando, and Navarun Gupta. Dynamic Modeling of the Pinna for Audio Spatialization. 2003, pp. 1–6.

Algazi, V. R., et al. The CIPIC HRTF Database. IEEE, 2001, pp. 1–4.

Cheng, Corey I., and Gregory H. Wakefield. Introduction to Head-Related Transfer Functions (HRTFs): Representations of HRTFs in Time, Frequency, and Space. pp. 1–28.

“HRTF-Based Systems.” The CIPIC Interface Laboratory Home Page, www.ece.ucdavis.edu/cipic/spatial-sound/tutorial/hrtfsys/#:~:text=ITD%20Model%20One%20of%20the%20simplest%20effective%20HRTF,assumed%20to%20be%20diagonally%20opposite%20across%20the%20head.

“HRTF.” 3-D Head-Related Transfer Function (HRTF) Interpolation - MATLAB, www.mathworks.com/help/audio/ref/interpolatehrtf.html.

Kim, Donn, and Antonio Dorset. Sound Spatialization Using an FPGA.


APPENDIX

ZYBO Z7020 SoC DEVELOPMENT BOARD

Callout - Component Description

1 - Power Switch
2 - Power Select Jumper and battery header
3 - Shared UART/JTAG USB port
4 - MIO LED
5 - MIO Pushbuttons (2)
6 - MIO Pmod
7 - USB OTG Connectors
8 - Logic LEDs (4)
9 - Logic Slide switches (4)
10 - USB OTG Host/Device Select Jumpers
11 - Standard Pmod
12 - High-speed Pmods (3)
13 - Logic Pushbuttons (4)
14 - XADC Pmod
15 - Processor Reset Pushbutton
16 - Logic configuration reset Pushbutton
17 - Audio Codec Connectors
18 - Logic Configuration Done LED
19 - Board Power Good LED
20 - JTAG Port for optional external cable
21 - Programming Mode Jumper
22 - Independent JTAG Mode Enable Jumper
23 - PLL Bypass Jumper
24 - VGA connector
25 - microSD connector (Reverse side)
26 - HDMI Sink/Source Connector
27 - Ethernet RJ45 Connector
28 - Power Jack


AUDIO.C

/******************************************************************************
 * @file audio.c
 *
 * Audio driver.
 *
 * @author RoHegbeC
 *
 * @date 2014-Oct-30
 *
 * @copyright
 * (c) 2015 Copyright Digilent Incorporated
 * All Rights Reserved
 *
 * This program is free software; distributed under the terms of BSD 3-clause
 * license ("Revised BSD License", "New BSD License", or "Modified BSD License")
 *
 * Redistribution and use in source and binary forms, with or without modification,
 * are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 * 3. Neither the name(s) of the above-listed copyright holder(s) nor the names
 *    of its contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * @description
 *
 * This program was initially developed to be run from within the BRAM. It is
 * constructed to run in polling mode, in which the program polls the Empty and
 * Full signals of the two FIFOs implemented in the audio I2S VHDL core.
 * In order to have continuous and stable sound both when recording and playing,
 * the user must ensure that the DDR cache is enabled. This is only mandatory
 * when the program is loaded into the DDR; if the program is stored in the BRAM,
 * then the cache is not mandatory.
 *
 * <pre>
 * MODIFICATION HISTORY:
 *
 * Ver   Who          Date         Changes
 * ----- ------------ ------------ -----------------------------------------------
 * 1.00  RoHegbeC     2014-Oct-30  First release
 * </pre>
 *
 *****************************************************************************/

#include "audio.h"

#include "../demo.h"

/************************** Variable Definitions *****************************/

extern volatile sDemo_t Demo;


/******************************************************************************
 * Function to write one byte (8 bits) to one of the registers of the audio
 * controller.
 *
 * @param u8RegAddr is the LSB part of the register address (0x40xx).
 * @param u8Data is the data byte to write.
 *
 * @return XST_SUCCESS if all the bytes have been sent to the controller,
 *         XST_FAILURE otherwise.
 *****************************************************************************/
XStatus fnAudioWriteToReg(u8 u8RegAddr, u16 u8Data)
{
    u8 u8TxData[2];
    u8 u8BytesSent;

    u8TxData[0] = u8RegAddr << 1;
    u8TxData[0] = u8TxData[0] | ((u8Data >> 8) & 0b1);
    u8TxData[1] = u8Data & 0xFF;

    u8BytesSent = XIic_Send(XPAR_IIC_0_BASEADDR, IIC_SLAVE_ADDR, u8TxData, 2, XIIC_STOP);

    //check if all the bytes were sent
    if (u8BytesSent != 2)
    {
        return XST_FAILURE;
    }

    return XST_SUCCESS;
}

/******************************************************************************
 * Function to read one byte (8 bits) from the register space of the audio
 * controller.
 *
 * @param u8RegAddr is the LSB part of the register address (0x40xx).
 * @param u8RxData is the returned value.
 *
 * @return XST_SUCCESS if the desired number of bytes have been read from the
 *         controller, XST_FAILURE otherwise.
 *****************************************************************************/
XStatus fnAudioReadFromReg(u8 u8RegAddr, u8 *u8RxData)
{
    u8 u8TxData[2];
    u8 u8BytesSent, u8BytesReceived;

    u8TxData[0] = u8RegAddr;
    u8TxData[1] = IIC_SLAVE_ADDR;

    u8BytesSent = XIic_Send(XPAR_IIC_0_BASEADDR, IIC_SLAVE_ADDR, u8TxData, 2, XIIC_STOP);
    //check if all the bytes were sent
    if (u8BytesSent != 2)
    {
        return XST_FAILURE;
    }

    u8BytesReceived = XIic_Recv(XPAR_IIC_0_BASEADDR, IIC_SLAVE_ADDR, u8RxData, 1, XIIC_STOP);
    //check if there are missing bytes
    if (u8BytesReceived != 1)
    {
        return XST_FAILURE;
    }

    return XST_SUCCESS;
}

XStatus fnAudioStartupConfig()
{
    union ubitField uConfigurationVariable;
    int Status;

    // Configure the I2S controller for generating a valid sampling rate
    uConfigurationVariable.l = Xil_In32(I2S_CLOCK_CONTROL_REG);
    uConfigurationVariable.bit.u32bit0 = 1;
    uConfigurationVariable.bit.u32bit1 = 0;
    uConfigurationVariable.bit.u32bit2 = 1;
    Xil_Out32(I2S_CLOCK_CONTROL_REG, uConfigurationVariable.l);
    uConfigurationVariable.l = 0x00000000;

    //STOP_TRANSACTION
    uConfigurationVariable.bit.u32bit1 = 1;
    Xil_Out32(I2S_TRANSFER_CONTROL_REG, uConfigurationVariable.l);

    //STOP_TRANSACTION
    uConfigurationVariable.bit.u32bit1 = 0;
    Xil_Out32(I2S_TRANSFER_CONTROL_REG, uConfigurationVariable.l);

    //slave: I2S
    Status = fnAudioWriteToReg(R15_SOFTWARE_RESET, 0b000000000);
    if (Status == XST_FAILURE)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: could not write R15_SOFTWARE_RESET (0x00)");
        }
        return XST_FAILURE;
    }
    usleep(1000);

    Status = fnAudioWriteToReg(R6_POWER_MGMT, 0b000110000);
    if (Status == XST_FAILURE)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: could not write R6_POWER_MGMT (0b000110000)");
        }
        return XST_FAILURE;
    }

    Status = fnAudioWriteToReg(R0_LEFT_ADC_VOL, 0b000010111);
    if (Status == XST_FAILURE)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: could not write R0_LEFT_ADC_VOL (0b000010111)");
        }
        return XST_FAILURE;
    }

    Status = fnAudioWriteToReg(R1_RIGHT_ADC_VOL, 0b000010111);
    if (Status == XST_FAILURE)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: could not write R1_RIGHT_ADC_VOL (0b000010111)");
        }
        return XST_FAILURE;
    }

    Status = fnAudioWriteToReg(R2_LEFT_DAC_VOL, 0b101111001);
    if (Status == XST_FAILURE)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: could not write R2_LEFT_DAC_VOL (0b101111001)");
        }
        return XST_FAILURE;
    }

    Status = fnAudioWriteToReg(R3_RIGHT_DAC_VOL, 0b101111001);
    if (Status == XST_FAILURE)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: could not write R3_RIGHT_DAC_VOL (0b101111001)");
        }
        return XST_FAILURE;
    }

    Status = fnAudioWriteToReg(R4_ANALOG_PATH, 0b000000000);
    if (Status == XST_FAILURE)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: could not write R4_ANALOG_PATH (0b000000000)");
        }
        return XST_FAILURE;
    }

    fnAudioWriteToReg(R5_DIGITAL_PATH, 0b000000000);
    fnAudioWriteToReg(R7_DIGITAL_IF, 0b000001010);
    fnAudioWriteToReg(R8_SAMPLE_RATE, 0b000000000);
    usleep(1000);
    fnAudioWriteToReg(R9_ACTIVE, 0b000000001);
    fnAudioWriteToReg(R6_POWER_MGMT, 0b000100000);

    return XST_SUCCESS;
}

/******************************************************************************
 * Initialize PLL and Audio controller over the I2C bus.
 *
 * @param none
 *
 * @return XST_SUCCESS on completion, XST_FAILURE otherwise.
 *****************************************************************************/
XStatus fnInitAudio()
{
    int Status;

    //Set the PLL and wait for Lock
    //Status = fnAudioPllConfig();
    //if (Status != XST_SUCCESS)
    //{
    //    if (Demo.u8Verbose)
    //    {
    //        xil_printf("\r\nError: Could not lock PLL");
    //    }
    //}

    //Configure the ADAU registers
    Status = fnAudioStartupConfig();
    if (Status != XST_SUCCESS)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nError: Failed I2C Configuration");
        }
    }

    Demo.fAudioPlayback = 0;
    Demo.fAudioRecord = 0;

    return XST_SUCCESS;
}

/******************************************************************************
 * Configure the I2S controller to receive data, which will be stored locally
 * in a vector (Mem).
 *
 * @param u32NrSamples is the number of samples to store.
 *
 * @return none.
 *****************************************************************************/
void fnAudioRecord(XAxiDma AxiDma, u32 u32NrSamples)
{
    union ubitField uTransferVariable;

    if (Demo.u8Verbose)
    {
        xil_printf("\r\nEnter Record function");
    }

    uTransferVariable.l = XAxiDma_SimpleTransfer(&AxiDma, (u32) MEM_BASE_ADDR, 5*u32NrSamples,
            XAXIDMA_DEVICE_TO_DMA);
    if (uTransferVariable.l != XST_SUCCESS)
    {
        if (Demo.u8Verbose)
            xil_printf("\n fail @ rec; ERROR: %d", uTransferVariable.l);
    }

    // Send number of samples to record
    Xil_Out32(I2S_PERIOD_COUNT_REG, u32NrSamples);

    // Start i2s initialization sequence
    uTransferVariable.l = 0x00000000;
    Xil_Out32(I2S_TRANSFER_CONTROL_REG, uTransferVariable.l);
    uTransferVariable.bit.u32bit1 = 1;
    Xil_Out32(I2S_TRANSFER_CONTROL_REG, uTransferVariable.l);

    // Enable Stream function to send data (S2MM)
    Xil_Out32(I2S_STREAM_CONTROL_REG, 0x00000001);

    if (Demo.u8Verbose)
    {
        xil_printf("\r\nRecording function done");
    }
}

/******************************************************************************
 * Configure the I2S controller to transmit data, which will be read out from
 * the local memory vector (Mem).
 *
 * @param u32NrSamples is the number of samples to play.
 *
 * @return none.
 *****************************************************************************/
void fnAudioPlay(XAxiDma AxiDma, u32 u32NrSamples)
{
    union ubitField uTransferVariable;

    if (Demo.u8Verbose)
    {
        xil_printf("\r\nEnter Playback function");
    }

    // Send number of samples to play
    Xil_Out32(I2S_PERIOD_COUNT_REG, u32NrSamples);

    // Start i2s initialization sequence
    uTransferVariable.l = 0x00000000;
    Xil_Out32(I2S_TRANSFER_CONTROL_REG, uTransferVariable.l);
    uTransferVariable.bit.u32bit0 = 1;
    Xil_Out32(I2S_TRANSFER_CONTROL_REG, uTransferVariable.l);

    uTransferVariable.l = XAxiDma_SimpleTransfer(&AxiDma, (u32) MEM_BASE_ADDR, 5*u32NrSamples,
            XAXIDMA_DMA_TO_DEVICE);
    if (uTransferVariable.l != XST_SUCCESS)
    {
        if (Demo.u8Verbose)
            xil_printf("\n fail @ play; ERROR: %d", uTransferVariable.l);
    }

    // Enable Stream function to send data (MM2S)
    Xil_Out32(I2S_STREAM_CONTROL_REG, 0x00000002);

    if (Demo.u8Verbose)
    {
        xil_printf("\r\nPlayback function done");
    }
}

/******************************************************************************
 * Configure the input path to MIC and disable all other input paths.
 * For additional information please refer to the ADAU1761 datasheet.
 *
 * @param none
 *
 * @return none.
 *****************************************************************************/
void fnSetMicInput()
{
    //MX1AUXG = MUTE; MX2AUXG = MUTE; LDBOOST = 0dB; RDBOOST = 0dB
    fnAudioWriteToReg(R4_ANALOG_PATH, 0b000010100);
    if (Demo.u8Verbose)
    {
        xil_printf("\r\nInput set to MIC");
    }
}

/******************************************************************************
 * Configure the input path to Line and disable all other input paths.
 * For additional information please refer to the ADAU1761 datasheet.
 *
 * @param none
 *
 * @return none.
 *****************************************************************************/
void fnSetLineInput()
{
    //MX1AUXG = 0dB; MX2AUXG = 0dB; LDBOOST = MUTE; RDBOOST = MUTE
    fnAudioWriteToReg(R4_ANALOG_PATH, 0b000010010);
    fnAudioWriteToReg(R5_DIGITAL_PATH, 0b000000000);
    if (Demo.u8Verbose)
    {
        xil_printf("\r\nInput set to LineIn");
    }
}

/******************************************************************************
 * Configure the output path to Line and disable all other output paths.
 * For additional information please refer to the ADAU1761 datasheet.
 *
 * @param none
 *
 * @return none.
 *****************************************************************************/
void fnSetLineOutput()
{
    //zybo does not have a line output
    //MX3G1 = mute; MX3G2 = mute; MX4G1 = mute; MX4G2 = mute
    //fnAudioWriteToReg(R4_ANALOG_PATH, 0x00);
    if (Demo.u8Verbose)
    {
        xil_printf("\r\nOutput set to LineOut");
    }
}

/******************************************************************************
 * Configure the output path to Headphone and disable all other output paths.
 * For additional information please refer to the ADAU1761 datasheet.
 *
 * @param none
 *
 * @return none.
 *****************************************************************************/
void fnSetHpOutput()
{
    //MX5G3 = MUTE; MX5EN = MUTE; MX6G4 = MUTE; MX6EN = MUTE
    fnAudioWriteToReg(R4_ANALOG_PATH, 0b000010110);
    fnAudioWriteToReg(R5_DIGITAL_PATH, 0b000000000);
    if (Demo.u8Verbose)
    {
        xil_printf("\r\nOutput set to HeadPhones");
    }
}

DMA.C

/*
 * dma.c
 *
 * Created on: Jan 20, 2015
 * Author: ROHegbeC
 */

#include "dma.h"

#include "../demo.h"

/************************** Variable Definitions *****************************/

extern volatile sDemo_t Demo;

extern XAxiDma_Config *pCfgPtr;

/******************************************************************************
 * This is the Interrupt Handler from the Stream to the MemoryMap. It is called
 * when an interrupt is triggered by the DMA.
 *
 * @param Callback is a pointer to the S2MM channel of the DMA engine.
 *
 * @return none
 *
 *****************************************************************************/
void fnS2MMInterruptHandler(void *Callback)
{
    u32 IrqStatus;
    int TimeOut;
    XAxiDma *AxiDmaInst = (XAxiDma *)Callback;

    //Read all the pending DMA interrupts
    IrqStatus = XAxiDma_IntrGetIrq(AxiDmaInst, XAXIDMA_DEVICE_TO_DMA);

    //Acknowledge pending interrupts
    XAxiDma_IntrAckIrq(AxiDmaInst, IrqStatus, XAXIDMA_DEVICE_TO_DMA);

    //If there are no interrupts we exit the Handler
    if (!(IrqStatus & XAXIDMA_IRQ_ALL_MASK))
    {
        return;
    }

    // If the error interrupt is asserted, raise the error flag, reset the
    // hardware to recover from the error, and return with no further
    // processing.
    if (IrqStatus & XAXIDMA_IRQ_ERROR_MASK)
    {
        Demo.fDmaError = 1;
        XAxiDma_Reset(AxiDmaInst);
        TimeOut = 1000;
        while (TimeOut)
        {
            if (XAxiDma_ResetIsDone(AxiDmaInst))
            {
                break;
            }
            TimeOut -= 1;
        }
        return;
    }

    if ((IrqStatus & XAXIDMA_IRQ_IOC_MASK))
    {
        Demo.fDmaS2MMEvent = 1;
    }
}

/******************************************************************************
 * This is the Interrupt Handler from the MemoryMap to the Stream. It is called
 * when an interrupt is triggered by the DMA.
 *
 * @param Callback is a pointer to the MM2S channel of the DMA engine.
 *
 * @return none
 *
 *****************************************************************************/
void fnMM2SInterruptHandler(void *Callback)
{
    u32 IrqStatus;
    int TimeOut;
    XAxiDma *AxiDmaInst = (XAxiDma *)Callback;

    //Read all the pending DMA interrupts
    IrqStatus = XAxiDma_IntrGetIrq(AxiDmaInst, XAXIDMA_DMA_TO_DEVICE);

    //Acknowledge pending interrupts
    XAxiDma_IntrAckIrq(AxiDmaInst, IrqStatus, XAXIDMA_DMA_TO_DEVICE);

    //If there are no interrupts we exit the Handler
    if (!(IrqStatus & XAXIDMA_IRQ_ALL_MASK))
    {
        return;
    }

    // If the error interrupt is asserted, raise the error flag, reset the
    // hardware to recover from the error, and return with no further
    // processing.
    if (IrqStatus & XAXIDMA_IRQ_ERROR_MASK)
    {
        Demo.fDmaError = 1;
        XAxiDma_Reset(AxiDmaInst);
        TimeOut = 1000;
        while (TimeOut)
        {
            if (XAxiDma_ResetIsDone(AxiDmaInst))
            {
                break;
            }
            TimeOut -= 1;
        }
        return;
    }

    if ((IrqStatus & XAXIDMA_IRQ_IOC_MASK))
    {
        Demo.fDmaMM2SEvent = 1;
    }
}

/******************************************************************************
 * Function to configure the DMA in Interrupt mode; this implies that the
 * scatter-gather function is disabled. Prior to calling this function, the
 * user must make sure that the Interrupts and the Interrupt Handlers have been
 * configured.
 *
 * @return XST_SUCCESS - if configuration was successful
 *         XST_FAILURE - when the specifications are not met
 *****************************************************************************/
XStatus fnConfigDma(XAxiDma *AxiDma)
{
    int Status;
    XAxiDma_Config *pCfgPtr;

    //Make sure the DMA hardware is present in the project
    //Ensures that the DMA hardware has been loaded
    pCfgPtr = XAxiDma_LookupConfig(XPAR_AXIDMA_0_DEVICE_ID);
    if (!pCfgPtr)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nNo config found for %d", XPAR_AXIDMA_0_DEVICE_ID);
        }
        return XST_FAILURE;
    }

    //Initialize DMA
    //Reads and sets all the available information
    //about the DMA to the AxiDma variable
    Status = XAxiDma_CfgInitialize(AxiDma, pCfgPtr);
    if (Status != XST_SUCCESS)
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nInitialization failed %d", Status);
        }
        return XST_FAILURE;
    }

    //Ensures that the Scatter Gather mode is not active
    if (XAxiDma_HasSg(AxiDma))
    {
        if (Demo.u8Verbose)
        {
            xil_printf("\r\nDevice configured as SG mode");
        }
        return XST_FAILURE;
    }

    //Disable all the DMA related Interrupts
    XAxiDma_IntrDisable(AxiDma, XAXIDMA_IRQ_ALL_MASK, XAXIDMA_DEVICE_TO_DMA);
    XAxiDma_IntrDisable(AxiDma, XAXIDMA_IRQ_ALL_MASK, XAXIDMA_DMA_TO_DEVICE);

    //Enable all the DMA Interrupts
    XAxiDma_IntrEnable(AxiDma, XAXIDMA_IRQ_ALL_MASK, XAXIDMA_DEVICE_TO_DMA);
    XAxiDma_IntrEnable(AxiDma, XAXIDMA_IRQ_ALL_MASK, XAXIDMA_DMA_TO_DEVICE);

    return XST_SUCCESS;
}

IIC.C

/******************************************************************************
 * @file iic.c
 *
 * Interrupt system initialization.
 *
 * @author Elod Gyorgy
 *
 * @date 2015-Jan-3
 *
 * @copyright
 * (c) 2015 Copyright Digilent Incorporated
 * All Rights Reserved
 *
 * This program is free software; distributed under the terms of BSD 3-clause
 * license ("Revised BSD License", "New BSD License", or "Modified BSD License")
 *
 * Redistribution and use in source and binary forms, with or without modification,
 * are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 * 3. Neither the name(s) of the above-listed copyright holder(s) nor the names
 *    of its contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * @description
 * Contains interrupt controller initialization function.
 *
 * <pre>
 * MODIFICATION HISTORY:
 *
 * Ver   Who          Date         Changes
 * ----- ------------ ------------ -----------------------------------------------
 * 1.00  Elod Gyorgy  2015-Jan-3   First release
 * </pre>
 *
 *****************************************************************************/

#include "intc.h"
#include "xparameters.h"

XStatus fnInitInterruptController(intc *psIntc)
{
    int result = 0;

#ifdef XPAR_XINTC_NUM_INSTANCES
    // Init driver instance
    RETURN_ON_FAILURE(XIntc_Initialize(psIntc, INTC_DEVICE_ID));

    // Start interrupt controller
    RETURN_ON_FAILURE(XIntc_Start(psIntc, XIN_REAL_MODE));

    Xil_ExceptionInit();

    // Register the interrupt controller handler with the exception table.
    // This is in fact the ISR dispatch routine, which calls our ISRs
    Xil_ExceptionRegisterHandler(XIL_EXCEPTION_ID_INT,
            (Xil_ExceptionHandler)XIntc_InterruptHandler,
            psIntc);
#endif

#ifdef XPAR_SCUGIC_0_DEVICE_ID
    XScuGic_Config *IntcConfig;

    /*
     * Initialize the interrupt controller driver so that it is ready to
     * use.
     */
    IntcConfig = XScuGic_LookupConfig(INTC_DEVICE_ID);
    if (NULL == IntcConfig)
    {
        return XST_FAILURE;
    }

    result = XScuGic_CfgInitialize(psIntc, IntcConfig, IntcConfig->CpuBaseAddress);
    if (result != XST_SUCCESS)
    {
        return XST_FAILURE;
    }
#endif

    //Xil_ExceptionEnable();

    return XST_SUCCESS;
}

/*
 * This function enables interrupts and connects interrupt service routines
 * declared in an interrupt vector table
 */
void fnEnableInterrupts(intc *psIntc, const ivt_t *prgsIvt, unsigned int csIVectors)
{
    unsigned int isIVector;

    Xil_AssertVoid(psIntc != NULL);
    Xil_AssertVoid(psIntc->IsReady == XIL_COMPONENT_IS_READY);

    /* Hook up interrupt service routines from IVT */
    for (isIVector = 0; isIVector < csIVectors; isIVector++)
    {
#ifdef __MICROBLAZE__
        XIntc_Connect(psIntc, prgsIvt[isIVector].id, prgsIvt[isIVector].handler, prgsIvt[isIVector].pvCallbackRef);

        /* Enable the interrupt vector at the interrupt controller */
        XIntc_Enable(psIntc, prgsIvt[isIVector].id);
#else
        XScuGic_SetPriorityTriggerType(psIntc, prgsIvt[isIVector].id, 0xA0, 0x3);
        XScuGic_Connect(psIntc, prgsIvt[isIVector].id, prgsIvt[isIVector].handler, prgsIvt[isIVector].pvCallbackRef);
        XScuGic_Enable(psIntc, prgsIvt[isIVector].id);
#endif
    }

    Xil_ExceptionInit();
    Xil_ExceptionRegisterHandler(XIL_EXCEPTION_ID_INT, (Xil_ExceptionHandler)INTC_HANDLER, psIntc);
    Xil_ExceptionEnable();
}

USERIO.C

/******************************************************************************
 * @file userio.c
 *
 * @author Elod Gyorgy
 *
 * @date 2015-Jan-15
 *
 * @copyright
 * (c) 2015 Copyright Digilent Incorporated
 * All Rights Reserved
 *
 * This program is free software; distributed under the terms of BSD 3-clause
 * license ("Revised BSD License", "New BSD License", or "Modified BSD License")
 *
 * Redistribution and use in source and binary forms, with or without modification,
 * are permitted provided that the following conditions are met:
 *
 * 1. Redistributions of source code must retain the above copyright notice, this
 *    list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright notice,
 *    this list of conditions and the following disclaimer in the documentation
 *    and/or other materials provided with the distribution.
 * 3. Neither the name(s) of the above-listed copyright holder(s) nor the names
 *    of its contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
 * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 * SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 * CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * @description
 *
 * @note
 *
 * <pre>
 * MODIFICATION HISTORY:
 *
 * Ver   Who          Date         Changes
 * ----- ------------ ------------ --------------------------------------------
 * 1.00  Elod Gyorgy  2015-Jan-15  First release
 * </pre>
 *
 *****************************************************************************/

#include <stdio.h>
#include "xparameters.h"
#include "userio.h"
#include "../demo.h"

#define USERIO_DEVICE_ID 0

extern volatile sDemo_t Demo;

void fnUpdateLedsFromSwitches(XGpio *psGpio);

XStatus fnInitUserIO(XGpio *psGpio)
{
    /* Initialize the GPIO driver. If an error occurs then exit */
    RETURN_ON_FAILURE(XGpio_Initialize(psGpio, USERIO_DEVICE_ID));

    /*
     * Perform a self-test on the GPIO. This is a minimal test and only
     * verifies that there is not any bus error when reading the data
     * register
     */
    RETURN_ON_FAILURE(XGpio_SelfTest(psGpio));

    /*
     * Set up the direction register so the switches and buttons are inputs
     * and the LED is an output of the GPIO
     */
    XGpio_SetDataDirection(psGpio, BTN_SW_CHANNEL, BTNS_SWS_MASK);

    fnUpdateLedsFromSwitches(psGpio);

    /*
     * Enable the GPIO channel interrupts so that push buttons can be
     * detected and enable interrupts for the GPIO device
     */
    XGpio_InterruptEnable(psGpio, BTN_SW_INTERRUPT);
    XGpio_InterruptGlobalEnable(psGpio);

    return XST_SUCCESS;
}

void fnUpdateLedsFromSwitches(XGpio *psGpio)
{
    static u32 dwPrevButtons = 0;
    u32 dwBtn;
    u32 dwBtnSw;

    dwBtnSw = XGpio_DiscreteRead(psGpio, BTN_SW_CHANNEL);
    dwBtn = dwBtnSw & (BTNU_MASK|BTNR_MASK|BTND_MASK|BTNL_MASK|BTNC_MASK);

    if (dwBtn == 0) { //No buttons pressed?
        Demo.fUserIOEvent = 0;
        dwPrevButtons = dwBtn;
        return;
    }

    // Has anything changed?
    if ((dwBtn ^ dwPrevButtons)) {
        u32 dwChanges = 0;
        dwChanges = dwBtn ^ dwPrevButtons;

        if (dwChanges & BTNU_MASK) {
            Demo.chBtn = 'u';
            if (Demo.u8Verbose) {
                xil_printf("\r\nBTNU");
            }
        }
        if (dwChanges & BTNR_MASK) {
            Demo.chBtn = 'r';
            if (Demo.u8Verbose) {
                xil_printf("\r\nBTNR");
            }
        }
        if (dwChanges & BTND_MASK) {
            Demo.chBtn = 'd';
            if (Demo.u8Verbose) {
                xil_printf("\r\nBTND");
            }
        }
        if (dwChanges & BTNL_MASK) {
            Demo.chBtn = 'l';
            if (Demo.u8Verbose) {
                xil_printf("\r\nBTNL");
            }
        }
        if (dwChanges & BTNC_MASK) {
            Demo.chBtn = 'c';
            if (Demo.u8Verbose) {
                xil_printf("\r\nBTNC");
            }
        }

        // Keep values in mind
        //dwPrevSwitches = dwSw;
        Demo.fUserIOEvent = 1;
        dwPrevButtons = dwBtn;
    }
}

/*
 * Default interrupt service routine
 * Lights up LEDs above active switches. Pressing any of the buttons inverts LEDs.
 */
void fnUserIOIsr(void *pvInst)
{
    XGpio *psGpio = (XGpio*)pvInst;

    /*
     * Disable the interrupt
     */
    XGpio_InterruptGlobalDisable(psGpio);

    /*
     * Check if the interrupt interests us
     */
    if ((XGpio_InterruptGetStatus(psGpio) & BTN_SW_INTERRUPT) != BTN_SW_INTERRUPT)
    {
        XGpio_InterruptGlobalEnable(psGpio);
        return;
    }

    fnUpdateLedsFromSwitches(psGpio);

    /* Clear the interrupt such that it is no longer pending in the GPIO */
    XGpio_InterruptClear(psGpio, BTN_SW_INTERRUPT);

    /*
     * Enable the interrupt
     */
    XGpio_InterruptGlobalEnable(psGpio);
}

/************************************************************************/
/*                                                                      */
/* demo.c -- Zybo DMA Demo                                              */
/*                                                                      */
/************************************************************************/
/* Author: Sam Lowe                                                     */
/* Modified: Madhumitra S.                                              */
/* Copyright 2015, Digilent Inc.                                        */
/************************************************************************/
/* Module Description:                                                  */
/*                                                                      */
/* This file contains code for running a demonstration of the           */
/* DMA audio inputs and outputs on the Zybo.                            */
/*                                                                      */
/************************************************************************/
/* Notes:                                                               */
/*                                                                      */
/* - The DMA max burst size needs to be set to 16 or less               */
/*                                                                      */
/************************************************************************/
/* Revision History:                                                    */
/*                                                                      */
/* 9/6/2016(SamL): Created                                              */
/*                                                                      */
/************************************************************************/

DEMO.C

#include "demo.h"
#include "audio/audio.h"
#include "dma/dma.h"
#include "intc/intc.h"
#include "userio/userio.h"
#include "iic/iic.h"

/***************************** Include Files *********************************/
#include "xaxidma.h"
#include "xparameters.h"
#include "xil_exception.h"
#include "xdebug.h"
#include "xiic.h"
#include "xtime_l.h"

#ifdef XPAR_INTC_0_DEVICE_ID
#include "xintc.h"
#include "microblaze_sleep.h"
#else
#include "xscugic.h"
#include "sleep.h"
#include "xil_cache.h"
#endif

/************************** Constant Definitions *****************************/

/*
 * Device hardware build related constants.
 */

// Audio constants
// Number of seconds to record/playback
#define NR_SEC_TO_REC_PLAY 7

// ADC/DAC sampling rate in Hz
//#define AUDIO_SAMPLING_RATE 1000
#define AUDIO_SAMPLING_RATE 96000

// Number of samples to record/playback
#define NR_AUDIO_SAMPLES (NR_SEC_TO_REC_PLAY*AUDIO_SAMPLING_RATE)

/* Timeout loop counter for reset */
#define RESET_TIMEOUT_COUNTER 10000

#define TEST_START_VALUE 0x0

/**************************** Type Definitions *******************************/


/***************** Macros (Inline Functions) Definitions *********************/

/************************** Function Prototypes ******************************/

#if (!defined(DEBUG))
extern void xil_printf(const char *format, ...);
#endif

/************************** Variable Definitions *****************************/

/*
 * Device instance definitions
 */
static XIic sIic;
static XAxiDma sAxiDma; /* Instance of the XAxiDma */
static XGpio sUserIO;

#ifdef XPAR_INTC_0_DEVICE_ID
static XIntc sIntc;
#else
static XScuGic sIntc;
#endif

// Interrupt vector table
#ifdef XPAR_INTC_0_DEVICE_ID
const ivt_t ivt[] = {
    //IIC
    {XPAR_AXI_INTC_0_AXI_IIC_0_IIC2INTC_IRPT_INTR, (XInterruptHandler)XIic_InterruptHandler, &sIic},
    //DMA Stream to MemoryMap Interrupt handler
    {XPAR_AXI_INTC_0_AXI_DMA_0_S2MM_INTROUT_INTR, (XInterruptHandler)fnS2MMInterruptHandler, &sAxiDma},
    //DMA MemoryMap to Stream Interrupt handler
    {XPAR_AXI_INTC_0_AXI_DMA_0_MM2S_INTROUT_INTR, (XInterruptHandler)fnMM2SInterruptHandler, &sAxiDma},
    //User I/O (buttons, switches, LEDs)
    {XPAR_AXI_INTC_0_AXI_GPIO_0_IP2INTC_IRPT_INTR, (XInterruptHandler)fnUserIOIsr, &sUserIO}
};
#else
const ivt_t ivt[] = {
    //IIC
    {XPAR_FABRIC_AXI_IIC_0_IIC2INTC_IRPT_INTR, (Xil_ExceptionHandler)XIic_InterruptHandler, &sIic},
    //DMA Stream to MemoryMap Interrupt handler
    {XPAR_FABRIC_AXI_DMA_0_S2MM_INTROUT_INTR, (Xil_ExceptionHandler)fnS2MMInterruptHandler, &sAxiDma},
    //DMA MemoryMap to Stream Interrupt handler
    {XPAR_FABRIC_AXI_DMA_0_MM2S_INTROUT_INTR, (Xil_ExceptionHandler)fnMM2SInterruptHandler, &sAxiDma},
    //User I/O (buttons, switches, LEDs)
    {XPAR_FABRIC_AXI_GPIO_0_IP2INTC_IRPT_INTR, (Xil_ExceptionHandler)fnUserIOIsr, &sUserIO}
};
#endif

/*****************************************************************************/
/**
 * Main function
 *
 * This function is the main entry of the interrupt test. It does the following:
 *   Initialize the interrupt controller
 *   Initialize the IIC controller
 *   Initialize the User I/O driver
 *   Initialize the DMA engine
 *   Initialize the Audio I2S controller
 *   Enable the interrupts
 *   Wait for a button event then start selected task
 *   Wait for task to complete
 *
 * @param None
 *
 * @return
 *   - XST_SUCCESS if example finishes successfully
 *   - XST_FAILURE if example fails.
 *
 * @note None.
 *
 ******************************************************************************/
int main(void)
{

    int Status;

    Demo.u8Verbose = 0;

    //Xil_DCacheDisable();

    xil_printf("\r\n--- Entering main() --- \r\n");

    //Initialize the interrupt controller
    Status = fnInitInterruptController(&sIntc);
    if (Status != XST_SUCCESS) {
        xil_printf("Error initializing interrupts");
        return XST_FAILURE;
    }

    // Initialize IIC controller
    Status = fnInitIic(&sIic);
    if (Status != XST_SUCCESS) {
        xil_printf("Error initializing I2C controller");
        return XST_FAILURE;
    }

    // Initialize User I/O driver
    Status = fnInitUserIO(&sUserIO);
    if (Status != XST_SUCCESS) {
        xil_printf("User I/O ERROR");
        return XST_FAILURE;
    }

    //Initialize DMA
    Status = fnConfigDma(&sAxiDma);
    if (Status != XST_SUCCESS) {
        xil_printf("DMA configuration ERROR");
        return XST_FAILURE;
    }

    //Initialize Audio I2S
    Status = fnInitAudio();
    if (Status != XST_SUCCESS) {
        xil_printf("Audio initializing ERROR");
        return XST_FAILURE;
    }

    // Busy-wait for 2 seconds before initializing the Audio I2S a second time
    {
        XTime tStart, tEnd;

        XTime_GetTime(&tStart);
        do {
            XTime_GetTime(&tEnd);
        } while ((tEnd - tStart)/(COUNTS_PER_SECOND/10) < 20);
    }

    //Initialize Audio I2S
    Status = fnInitAudio();
    if (Status != XST_SUCCESS) {
        xil_printf("Audio initializing ERROR");
        return XST_FAILURE;
    }

    // Enable all interrupts in our interrupt vector table
    // Make sure all driver instances using interrupts are initialized first
    fnEnableInterrupts(&sIntc, &ivt[0], sizeof(ivt)/sizeof(ivt[0]));

    xil_printf("----------------------------------------------------------\r\n");
    xil_printf("Zybo Z7-20 DMA Audio Demo\r\n");
    xil_printf("----------------------------------------------------------\r\n");
    xil_printf(" Controls:\r\n");
    xil_printf(" BTN1: Record from MIC IN\r\n");
    xil_printf(" BTN2: Play on HPH OUT\r\n");
    xil_printf(" BTN3: Record from LINE IN\r\n");
    xil_printf("----------------------------------------------------------\r\n");

    // Main loop
    while (1) {
        // Check the DMA S2MM event flag
        if (Demo.fDmaS2MMEvent) {
            xil_printf("\r\nRecording Done...");

            // Disable Stream function to send data (S2MM)
            Xil_Out32(I2S_STREAM_CONTROL_REG, 0x00000000);
            Xil_Out32(I2S_TRANSFER_CONTROL_REG, 0x00000000);

            Xil_DCacheInvalidateRange((u32) MEM_BASE_ADDR, 5*NR_AUDIO_SAMPLES);
            //microblaze_invalidate_dcache();

            // Reset S2MM event and record flag
            Demo.fDmaS2MMEvent = 0;
            Demo.fAudioRecord = 0;
        }

        // Check the DMA MM2S event flag
        if (Demo.fDmaMM2SEvent) {
            xil_printf("\r\nPlayback Done...");

            // Disable Stream function to receive data (MM2S)
            Xil_Out32(I2S_STREAM_CONTROL_REG, 0x00000000);
            Xil_Out32(I2S_TRANSFER_CONTROL_REG, 0x00000000);

            // Flush cache
            Xil_DCacheFlushRange((u32) MEM_BASE_ADDR, 5*NR_AUDIO_SAMPLES);

            // Reset MM2S event and playback flag
            Demo.fDmaMM2SEvent = 0;
            Demo.fAudioPlayback = 0;
        }

        // Check the DMA error event flag
        if (Demo.fDmaError) {
            xil_printf("\r\nDma Error...");
            xil_printf("\r\nDma Reset...");

            Demo.fDmaError = 0;
            Demo.fAudioPlayback = 0;
            Demo.fAudioRecord = 0;
        }

        // Check the button change event
        if (Demo.fUserIOEvent) {
            switch (Demo.chBtn) {
            case 'u':
                if (!Demo.fAudioRecord && !Demo.fAudioPlayback) {
                    xil_printf("\r\nStart Recording...\r\n");
                    fnSetMicInput();
                    fnAudioRecord(sAxiDma, NR_AUDIO_SAMPLES);
                    Demo.fAudioRecord = 1;
                } else {
                    if (Demo.fAudioRecord) {
                        xil_printf("\r\nStill Recording...\r\n");
                    } else {
                        xil_printf("\r\nStill Playing back...\r\n");
                    }
                }

                break;
            case 'd':
                if (!Demo.fAudioRecord && !Demo.fAudioPlayback) {
                    xil_printf("\r\nStart Playback...\r\n");
                    fnSetHpOutput();
                    fnAudioPlay(sAxiDma, NR_AUDIO_SAMPLES);
                    Demo.fAudioPlayback = 1;
                } else {
                    if (Demo.fAudioRecord) {
                        xil_printf("\r\nStill Recording...\r\n");
                    } else {
                        xil_printf("\r\nStill Playing back...\r\n");
                    }
                }

                break;
            case 'r':
                if (!Demo.fAudioRecord && !Demo.fAudioPlayback) {
                    xil_printf("\r\nStart Recording...\r\n");
                    fnSetLineInput();
                    fnAudioRecord(sAxiDma, NR_AUDIO_SAMPLES);
                    Demo.fAudioRecord = 1;
                } else {
                    if (Demo.fAudioRecord) {
                        xil_printf("\r\nStill Recording...\r\n");
                    } else {
                        xil_printf("\r\nStill Playing back...\r\n");
                    }
                }
                break;

            case 'l':
                if (!Demo.fAudioRecord && !Demo.fAudioPlayback) {
                    xil_printf("\r\nStart Playback...");
                    fnSetLineOutput();
                    fnAudioPlay(sAxiDma, NR_AUDIO_SAMPLES);
                    Demo.fAudioPlayback = 1;
                } else {
                    if (Demo.fAudioRecord) {
                        xil_printf("\r\nStill Recording...\r\n");
                    } else {
                        xil_printf("\r\nStill Playing back...\r\n");
                    }
                }
                break;
            default:
                break;
            }

            // Reset the user I/O flags
            Demo.chBtn = 0;
            Demo.fUserIOEvent = 0;
        }
    }

    xil_printf("\r\n--- Exiting main() --- \r\n");

    return XST_SUCCESS;
}