If dac_valid is not a constant '1', it gets synchronized with the
dac_data_sync signal. As a consequence, dac_valid never asserts while
dac_data_sync is high, which skips the phase initialization.
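Below is a minimal Verilog sketch of the intended behavior, assuming a
DDS-style phase accumulator and hypothetical names (phase_acc, phase_incr,
phase_init); the point is that the initialization must react to
dac_data_sync even when dac_valid is gated:

  module phase_init_sketch #(
    parameter PHASE_WIDTH = 16
  ) (
    input                        dac_clk,
    input                        dac_valid,
    input                        dac_data_sync,
    input      [PHASE_WIDTH-1:0] phase_init,
    input      [PHASE_WIDTH-1:0] phase_incr,
    output reg [PHASE_WIDTH-1:0] phase_acc
  );

    always @(posedge dac_clk) begin
      if (dac_data_sync == 1'b1) begin
        // initialize the phase regardless of dac_valid; a dac_valid gated
        // by dac_data_sync would never reach this branch
        phase_acc <= phase_init;
      end else if (dac_valid == 1'b1) begin
        phase_acc <= phase_acc + phase_incr;
      end
    end

  endmodule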
ADRV9001 interfacing IP supports the following modes on Xilinx devices:
A            B  C   D       E       F      G    H
CSSI 1-lane  1  32  80      80      2.5    SDR  8
CSSI 1-lane  1  32  160     80      5      DDR  4
CSSI 4-lane  4  8   80      80      10     SDR  2
CSSI 4-lane  4  8   160     80      20     DDR  1
LSSI 1-lane  1  32  983.04  491.52  30.72  DDR  4
LSSI 2-lane  2  16  983.04  491.52  61.44  DDR  2
Columns description:
A - SSI mode
B - Data lanes per channel
C - Serialization factor per data lane
D - Max data lane rate (MHz)
E - Max clock rate (MHz)
F - Max sample rate for I/Q (MHz)
G - Data type
H - DDS rate
CSSI - CMOS Source Synchronous Interface
LSSI - LVDS Source Synchronous Interface
Intel devices support only the CSSI modes.
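In every mode the maximum I/Q sample rate (column F) equals the maximum
data lane rate (column D) divided by the per-lane serialization factor
(column C), e.g. 983.04 / 32 = 30.72 MHz for LSSI 1-lane and
160 / 8 = 20 MHz for CSSI 4-lane DDR.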
De-assert dac_rst together with an updated control set.
This allows writing the control registers before releasing the reset,
which is important at start-up, when a stable set of controls is required.
De-assert adc_rst together with an updated control set.
This allows writing the control registers before releasing the reset,
which is important at start-up, when a stable set of controls is required.
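A minimal sketch of the idea behind both reset changes, with hypothetical
names (up_ctrl for the control set written over the register map,
core_ctrl for the set used by the core); the controls are captured on the
de-asserting edge of the reset, so the core always starts from a stable
configuration:

  module ctrl_on_reset_release #(
    parameter CTRL_WIDTH = 32
  ) (
    input                       clk,
    input                       rst,       // dac_rst or adc_rst
    input      [CTRL_WIDTH-1:0] up_ctrl,   // controls written while in reset
    output reg [CTRL_WIDTH-1:0] core_ctrl
  );

    reg rst_d = 1'b1;

    always @(posedge clk) begin
      rst_d <= rst;
      // pick up the updated control set exactly when the reset is released
      if (rst_d == 1'b1 && rst == 1'b0)
        core_ctrl <= up_ctrl;
    end

  endmodule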
Allow monitoring of non-PN patterns which contain zeros,
e.g. nibble ramp, full-range ramp.
Single zeros are ignored while in sync, while OOS_THRESHOLD
consecutive zeros or non-matching samples assert the out-of-sync line.
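A hedged sketch of the monitoring rule, with assumed names and threshold
value (rx_data, pn_expected, OOS_THRESHOLD = 16); a lone zero sample is
tolerated while in sync, but OOS_THRESHOLD consecutive zero or mismatching
samples assert the out-of-sync flag:

  module oos_monitor_sketch #(
    parameter DATA_WIDTH    = 16,
    parameter OOS_THRESHOLD = 16   // assumed value
  ) (
    input                   clk,
    input                   valid,
    input  [DATA_WIDTH-1:0] rx_data,
    input  [DATA_WIDTH-1:0] pn_expected,
    output reg              out_of_sync
  );

    reg [$clog2(OOS_THRESHOLD+1)-1:0] err_count = 'd0;

    // an all-zero word is treated as suspicious even if it matches the
    // expected ramp value, but a single occurrence is not enough for OOS
    wire sample_bad = (rx_data == {DATA_WIDTH{1'b0}}) ||
                      (rx_data != pn_expected);

    always @(posedge clk) begin
      if (valid) begin
        if (sample_bad) begin
          if (err_count == OOS_THRESHOLD - 1)
            out_of_sync <= 1'b1;      // consecutive error threshold reached
          else
            err_count <= err_count + 1'b1;
        end else begin
          err_count <= 'd0;           // isolated errors are forgotten
        end
      end
    end

    // reset and re-synchronization handling intentionally omitted

  endmodule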
Fix the *_ip.tcl scripts for the axi_spi_engine and spi_engine_offload
modules.
For bool parameters, the value_format and value properties must be set
for both the user and HDL parameters. Otherwise, in the generated Verilog
code the tool will use the strings "true" or "false" instead of 0 or 1.
The input data path has a delay section that compensates for the ADC path delay.
By using a dynamic shift register coding style we can improve the
resource utilization on m2k (see the sketch after the table below):
Resource  Before  After  Difference
LUT       10097   10048  48 (0.28%)
LUTRAM    516     540    -24 (-0.4%)
FF        15285   14803  482 (1.37%)
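The dynamic shift register style referenced above follows the usual
synthesis template; a self-contained sketch with hypothetical parameter
and port names is shown below. The delay line is inferred as SRL-based
LUTRAM with a run-time selectable tap instead of a chain of flip-flops:

  module dyn_delay_sketch #(
    parameter WIDTH = 16,
    parameter DEPTH = 32
  ) (
    input                      clk,
    input                      ce,
    input  [$clog2(DEPTH)-1:0] tap,    // selects the number of delay taps
    input  [WIDTH-1:0]         din,
    output [WIDTH-1:0]         dout
  );

    reg [WIDTH-1:0] sr [DEPTH-1:0];
    integer i;

    always @(posedge clk) begin
      if (ce) begin
        for (i = DEPTH-1; i > 0; i = i - 1)
          sr[i] <= sr[i-1];
        sr[0] <= din;
      end
    end

    // the dynamic read address is what maps the array to LUTRAM/SRLs
    assign dout = sr[tap];

  endmodule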
The number of delay taps in the LA data path can be controlled manually,
from the regmap, or automatically, according to the axi_adc_decimate's
rate.
Moreover, because the rate is configured by software and the
initialization time differs between the ADC path and the LA path, there
is an uncertainty of plus/minus one sample between the two. Because the
ADC and LA paths share the same clock, we can easily synchronize them.
We can't use the reset for this, because the rate generation mechanism
is different between the two, so the ADC path is used as the master
valid generator and drives the LA path.
The synchronization is enabled by setting the rate source bit. This
mechanism can only be used if the desired rate for both paths is equal,
including the oversampling from the ADC decimation.
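A minimal sketch of the rate source selection, with assumed signal names;
when the regmap bit is set, the ADC path's decimated valid drives the LA
path, so both paths advance on the same samples:

  module la_rate_source_sketch (
    input      clk,
    input      rate_source_adc,     // regmap bit: 1 = ADC path is the source
    input      adc_decimate_valid,  // valid generated by the ADC path
    input      la_rate_valid,       // valid from the LA path's own counter
    output reg la_valid
  );

    always @(posedge clk) begin
      // both paths run on the same clock, so a simple mux keeps them aligned
      la_valid <= rate_source_adc ? adc_decimate_valid : la_rate_valid;
    end

  endmodule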
Adds information on:
- Log2 of the interface data widths, in bits
- Interface type (0 - AXI MemoryMap, 1 - AXI Stream, 2 - FIFO)
This lets the driver discover the interface width and interface type
settings, which will deprecate the corresponding device tree properties.
This is useful for parameterized projects where the width of the
datapath changes, as it allows the use of a generic device tree node.
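A hedged sketch of how such a read-only descriptor could be packed; the
field layout, parameter and port names below are hypothetical, only the
idea of exposing the log2 widths and type codes to software comes from
the change itself:

  module interface_description_sketch #(
    parameter DATA_WIDTH_SRC  = 64,
    parameter DATA_WIDTH_DEST = 128,
    parameter TYPE_SRC        = 2,  // 0 - AXI MemoryMap, 1 - AXI Stream, 2 - FIFO
    parameter TYPE_DEST       = 0
  ) (
    output [31:0] up_interface_description
  );

    localparam [3:0] WIDTH_LOG2_SRC  = $clog2(DATA_WIDTH_SRC);
    localparam [3:0] WIDTH_LOG2_DEST = $clog2(DATA_WIDTH_DEST);
    localparam [1:0] TYPE_SRC_I      = TYPE_SRC;
    localparam [1:0] TYPE_DEST_I     = TYPE_DEST;

    // hypothetical packing: {reserved, dest type, dest log2 width,
    //                        src type, src log2 width}
    assign up_interface_description = {20'd0,
                                       TYPE_DEST_I, WIDTH_LOG2_DEST,
                                       TYPE_SRC_I,  WIDTH_LOG2_SRC};

  endmodule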
Updated version to 4.3.a
Optimize the oversampling mechanism.
The behavior of axi_dac_interpolate was changing when a debug module was
added to the core; the current code has better utilization and
reliability.
When using an oversampling ratio of 2 for axi_dac_interpolate, the rate
was the same as with an oversampling ratio of 1 (bypassing).
This commit removes the bypass for the ratio of 2.
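A minimal sketch of the counter behavior after removing the ratio-2
bypass, with assumed names and register semantics (the ratio register
holds the actual ratio); only a ratio of 1 bypasses the interpolation,
any larger ratio goes through the counter:

  module interpolation_counter_sketch #(
    parameter RATIO_WIDTH = 32
  ) (
    input                    clk,
    input  [RATIO_WIDTH-1:0] interpolation_ratio,
    output reg               sample_valid
  );

    reg [RATIO_WIDTH-1:0] counter = 'd0;

    always @(posedge clk) begin
      if (interpolation_ratio <= 1) begin
        sample_valid <= 1'b1;             // true bypass only for ratio 1
        counter      <= 'd0;
      end else if (counter == interpolation_ratio - 1) begin
        sample_valid <= 1'b1;             // one sample out every 'ratio' clocks
        counter      <= 'd0;
      end else begin
        sample_valid <= 1'b0;
        counter      <= counter + 1'b1;
      end
    end

  endmodule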