Change from C51 board back to Arduino

The C51 board helped me quickly find the problem in the STM32 code. However, after some consideration, it's not suitable as the remote-controller agent connecting to the PC (through a USB2TTL converter). It has basic control over individual pins, interrupt routines and so on, but the effort needed to make the UART and SPI interfaces work together into a proper communication protocol is just painful (for now), and the documentation and official support for this board/chip (it's a custom-design board) are not that good. Most of all, the C51 board is difficult to extend in the future. Arduino is much better on all these points; the pity is that its clock frequency is much lower (8/16 MHz compared to 42 MHz on the C51 board). But the coding should be simpler.


Basic SPI library in Arduino + NRF24L01+ =>

1. Receive data from UART and send through NRF24L01+ (SPI)

2. Receive ACK with Payload, send to UART

3. Interrupt instead of Polling

NRF24L01+ two way communication

To achieve two-way communication with NRF24L01+ modules, there are basically two options:

1. Switch the roles of PRX and PTX when necessary; delays, idle times etc. need to be managed accordingly (this may get tricky when a complex and reliable communication protocol is to be implemented).
2. Use the ACK-with-payload feature of Enhanced ShockBurst.
The drawback here is that the PRX cannot know what the ACK payload should be before its MCU has downloaded and handled the data from the PTX, so the payload carried by an ACK is always the reply to the packet before the current one. However, with a proper protocol it could be usable.
Later I will investigate both solutions and see which is easier to implement and which performs better in terms of communication speed.
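The one-packet lag of the ACK-payload option can be illustrated with a small host-side model (plain C, no radio or RF24 library involved; `prx_receive` is a hypothetical stand-in for the PRX side, not a real API):

```c
/* Toy model of a PRX using ACK-with-payload: the ACK answering
   packet N carries whatever was preloaded BEFORE packet N arrived,
   i.e. at best the reply to packet N-1. */

static int preloaded_ack = -1;     /* -1 = nothing preloaded yet */

/* Hypothetical PRX: returns the ACK payload the PTX would see for
   this packet, then preloads the reply to the packet just received. */
int prx_receive(int packet)
{
    int ack = preloaded_ack;       /* sent by the radio before the MCU runs */
    preloaded_ack = packet + 100;  /* MCU computes a reply and preloads it */
    return ack;
}
```

Whatever protocol goes on top must therefore tolerate this one-packet offset (e.g. by carrying sequence numbers in the payloads).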

BS2. Learning ARM STM32 Development

STEP2. Test/Modify the example project from the firmware. [Check]
Get I2C/SPI/UART working.
Well, I tried an example project blinking the two LEDs on the board; it works, and debug mode is okay.
However, it's more complicated to work with I2C/SPI/U(S)ART, as pins may be shared and the names on the PCB are not directly readable; the manual is needed as a reference.
I need to finish the document on the architecture of the MCU first (130 pages :(). Meanwhile I will check whether I can modify the "printf" example project to work with the USB2Serial component I have now. If yes, then the next step is to make I2C work with some of the I2C sensors I have now.
STEP3. PRINTF with UART [____]

Learning ARM STM32 development

I have had this MINISTM32 board for a long time, but never got started on any application with it.

I did try to start with the ARM Cortex-M3 datasheet, but it's very difficult to start from there if the target is to get some dummy applications running ASAP.
I also had some example projects provided by the seller, but their structure is not easy to grasp for a total beginner.
So I’m making a baby-step plan (as indicated by an online tutorial).
STEP1. Get the user manual of the MCU (which is the STM32F103ZE) and the firmware from ST.   [Check]
The documents can be got from:
The firmware:
This is a firmware package from ST itself, providing a library (with example projects) for using the standard peripherals (I'm trying to get the board interacting with some sensors I have: a gyro with I2C, a camera with SPI, etc.).
All the firmware packages for STM32 MCUs are listed here:
Just in case I need firmware for SD card, Flash, USB etc. later.
STEP2. Test/Modify the example project from the firmware. [____]
Get I2C/SPI/UART working.

Fault Injection Simulation on miniMIPS with ModelSim

Recently, I changed from Mentor Graphics ModelSim to Synopsys VCS-MX to run the fault injection simulation campaign.
Before, fault injection was done through the force (noforce) commands in a script run by ModelSim (plus the examine command when the fault to inject is a bit flip). This approach works fine, but it is slow. I also made the wrong decision of using C to generate the script and start the fault injection simulation, which involves a lot of string handling, and I was not able to do that elegantly in a short time.
Then again, the DUT we use in the group is miniMIPS (kind of outdated, no longer actively updated on OpenCores). Its original ROM/RAM module is size-limited, and enlarging it directly in the VHDL code makes the simulator infer a large register array, which leads to huge memory cost and lower simulation speed (ModelSim reported out-of-memory several times for me).
Another approach is to use the foreign language interface to emulate the behavior of the memory module:

1) The external program communicates with the DUT directly: it reads outputs from the DUT and assigns inputs to it, just like a VHDL module would. Timing, delays etc. are directly controllable in the external program.
2) Use a VHDL wrapper in which external functions are called for memory read/memory write. Timing and delays are controlled in the VHDL wrapper (easier than the first solution), as follows.
————in the VHDL wrapper ————————

if r_w /= '1' then -- reading from memory
    read_mem(vAddr, vData);
    data_inout <= std_logic_vector(to_signed(vData, 32));
else               -- writing to memory
    vData := to_integer(signed(data_inout));
    write_mem(vAddr, vData);
end if;

———–the foreign function declarations in VHDL———–

package memc_pkg is
    procedure read_mem(addr : in integer;   -- integer address (32 bit)
                       data : out integer); -- integer data out (32-bit instruction)
    attribute foreign of read_mem : procedure is "read_mem ./";

    procedure write_mem(addr : in integer;  -- integer address (32 bit)
                        data : in integer); -- integer data in (32-bit instruction)
    attribute foreign of write_mem : procedure is "write_mem ./";

    procedure write_img;
    attribute foreign of write_img : procedure is "write_img ./";

    procedure reload_program;
    attribute foreign of reload_program : procedure is "reload_program ./";
end memc_pkg;

package body memc_pkg is
    procedure read_mem(addr : in integer; data : out integer) is
    begin
        assert false report "ERROR: foreign subprogram read_mem not called" severity note;
    end;

    procedure write_mem(addr : in integer; data : in integer) is
    begin
        assert false report "ERROR: foreign subprogram write_mem not called" severity note;
    end;

    procedure write_img is
    begin
        assert false report "ERROR: foreign subprogram write_img not called" severity note;
    end;

    procedure reload_program is
    begin
        assert false report "ERROR: foreign subprogram reload_program not called" severity note;
    end;
end memc_pkg;
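For completeness, here is a minimal sketch of what the C side of these foreign subprograms can look like (the memory size and the reset behavior of `reload_program` are my assumptions; `in` integer parameters arrive by value and `out` parameters by pointer, following the ModelSim FLI convention for foreign subprograms):

```c
#include <stdint.h>
#include <string.h>

#define MEM_WORDS 4096               /* hypothetical memory size, in 32-bit words */
static int32_t mem[MEM_WORDS];

/* Called from the VHDL wrapper as read_mem(vAddr, vData). */
void read_mem(int addr, int *data)
{
    *data = mem[addr % MEM_WORDS];
}

/* Called from the VHDL wrapper as write_mem(vAddr, vData). */
void write_mem(int addr, int data)
{
    mem[addr % MEM_WORDS] = data;
}

/* Clear the memory content, e.g. between fault injection runs. */
void reload_program(void)
{
    memset(mem, 0, sizeof mem);
}
```

The real implementation would additionally load the program image from a file and could log accesses (hence the `-lsqlite3` in the Makefile below), but the read/write plumbing is no more than this.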


——————Example Makefile for C program —————-

.PHONY: all clean

all: memc.so

memc.o: mem_v2.c
	gcc -c -g -I$(MTI_HOME)/include $^ -o $@ -fPIC -fno-stack-protector

memc.so: memc.o
	ld -shared -E -o $@ $^ -lsqlite3

clean:
	rm -f memc.o memc.so

$(MTI_HOME) should be set to point to the modeltech folder of your ModelSim installation.

Then, to get all the fault sites to inject (for example, SA0/1), dump a VCD file and parse it afterwards to get all the signals in the design on which the force/noforce and examine commands can be used. (This is not exactly the fault site list, but it can be used as you see fit.) To get fault sites for bit-flip faults (at gate level), extract all the signals ending with "/Q" in the design (this depends on the library used: all and only the FFs in the library we used end with "/Q"; I'm not sure this holds for other libraries).
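The "/Q" filtering step is just a suffix check on each signal path pulled from the VCD; a sketch in plain C (the helper name is mine, and it assumes the library's FF-output naming convention described above):

```c
#include <stdbool.h>
#include <string.h>

/* Return true if a signal path looks like a flip-flop output,
   assuming the cell library exposes FF outputs with a "/Q" suffix. */
bool is_flipflop_output(const char *path)
{
    size_t n = strlen(path);
    return n >= 2 && strcmp(path + n - 2, "/Q") == 0;
}
```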
Generate the script to be executed by ModelSim to inject faults and collect results.

Problem: the restart command in ModelSim may not be enough to release all memory (or I missed some arguments to restart, or my external C program contains a memory leak). Memory consumption grows over the course of the fault injection campaign, so I split the script to run 500-1000 fault injections, then quit vsim and start again.

Turning to KDE

As I’m trying to finish my current project in VHDL recently, I found that kate is the most handy editor for me. Yes, I know vim is powerful and I did try to use vim, however, I’m not very familiar with it, and it requires some configuration to be done to enable VHDL folding function. So I’m turning to use KDE. The thing is that I always try to make things like email client and IM to work when I change OS or desktop enviroment, KDE is not so user friendly as gnome, so it takes some time to do the configuration.

I’m using KDE 4.7.x latest one 🙂

To get KMail to work:

kmail terminates during startup with “Failed to fetch the resource collection”

To use google calendar:

Accessing Google Calendar from KOrganizer – Occasional Thoughts (This I didn’t do, too troublesome)

An alternative is to use davcal:



Using SRAM as FrameBuffer on DE2 115

As I need to use a NiosII processor in my project and run a program on it, QSys is used, and the SDRAM holds the program code and data. The image captured from the D5M must be displayed on a VGA monitor, so a frame buffer is required to store the image data, and the best choice for it is the SRAM.

The SRAM on the DE2-115 is 16 bits wide, while 24 bits are needed to store an 8-8-8 RGB pixel, so there are two solutions:

1. switch to 5-6-5 16-bit RGB data, which also reduces the time spent accessing the SRAM;

2. change the custom logic in the SRAM controller (in fact, I need to write the SRAM controller all by myself in VHDL) to allow 32-bit access (in two cycles).
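For the first option, packing 8-8-8 RGB into 5-6-5 is just a matter of dropping the low bits of each channel. A small C helper shows the bit layout (the function name is mine; in the FPGA this is pure wiring, no logic needed):

```c
#include <stdint.h>

/* Pack 8-8-8 RGB into one 16-bit 5-6-5 word: red in bits 15..11,
   green in bits 10..5, blue in bits 4..0. The low 3 bits of red and
   blue, and the low 2 bits of green, are simply dropped. */
uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```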

Another problem is that both the D5M controller and the VGA controller will try to access the SRAM, probably at the same time, so logic needs to be added to resolve the conflict. Again, two solutions:

1. Implement only one read/write slave port in the SRAM controller and share it between the D5M controller's and the VGA controller's master interfaces, letting the arbitration provided by QSys resolve the conflict. This is not a good idea, as QSys uses fairness-based round-robin arbitration, while in this case a read request from the VGA controller has a limited response time, whereas a write request from the D5M controller can wait a couple of clock cycles.

2. Implement two slave interfaces (one read, one write) in the SRAM controller, and use a FIFO to make write requests wait for read requests. The FIFO can use the Megafunction implementation.
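The second option boils down to a fixed-priority arbiter: a pending read always wins the cycle, and writes wait in the FIFO. A toy C model of that policy (names, FIFO depth and the single-cycle abstraction are illustrative, not the actual Avalon logic):

```c
#include <stdbool.h>
#include <stdint.h>

#define FIFO_DEPTH 8                 /* illustrative depth */

typedef struct { uint32_t addr; uint16_t data; } WriteReq;

static WriteReq fifo[FIFO_DEPTH];
static int head = 0, tail = 0, count = 0;

/* D5M side: queue a pixel write; returns false when the FIFO is full
   (back-pressure toward the camera). */
bool push_write(uint32_t addr, uint16_t data)
{
    if (count == FIFO_DEPTH)
        return false;
    fifo[tail] = (WriteReq){ addr, data };
    tail = (tail + 1) % FIFO_DEPTH;
    count++;
    return true;
}

/* One SRAM cycle: a pending VGA read always wins the cycle; only an
   idle cycle drains one queued write. Returns true when a write was
   issued this cycle (copied into *out). */
bool sram_cycle(bool read_pending, WriteReq *out)
{
    if (read_pending || count == 0)
        return false;
    *out = fifo[head];
    head = (head + 1) % FIFO_DEPTH;
    count--;
    return true;
}
```

The FIFO depth has to cover the longest burst of consecutive VGA reads, otherwise the camera gets back-pressured.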

Tomorrow I’ll try to implement the second solution.


On Friday, I’ll have an oral test for "Specification and Simulation of Digital Systems", hopefully it will go well this time.


Updates on this part of the project will be posted here later on.




Today (or yesterday) I rewrote the SRAM controller using separate Avalon-MM read and write slave interfaces for the D5M controller and the VGA controller. However, now the content written into the SRAM cells is not exactly what I intend to write, and it looks like random errors.

I've already tried SignalTap II for debugging, but it's not well suited here, as several clocks are involved and it requires recompiling the project every time (quite slow). I'll try another tool (SignalProbe) later.

Today I've got an oral test. Let's pray the professor won't be harsh. 😉



The oral test went very well. It was just two simple questions, and though I didn't know the answer to the very first one the professor asked, I took a shot and guessed the right answer 🙂

With another course (this one) cleaned up, I have more time for the DE2-115 project. But not today.



Damn it. The timing of the Avalon Memory-Mapped read slave of the SRAM controller was wrong: the content written to SRAM is right now, but the data read back by the VGA controller is not correct, because I implemented the logic in a process synchronized to the CLK signal. Need to rewrite the SRAM controller again. 😦



Finally the system is finished. The SRAM is used as a frame buffer, with a single read/write cycle to transfer one 16-bit RGB pixel from/to SRAM. The problem is that the SRAM does not work at 100 MHz, nor at 80 MHz; 50 MHz works and is enough for our project. Also, the SRAM is accessed directly through custom IP instead of the Avalon Memory-Mapped interface, as this way the read requests from the VGA controller and the write requests from the D5M camera can be correctly scheduled in time.