
Adding new SKU Mellanox-SN4600C-C64 #14

Open
wants to merge 1 commit into base: master
Conversation


@madhanmellanox (Owner) commented on May 26, 2021

Why I did it

I did it to add a new SKU, Mellanox-SN4600C-C64.

How I did it

I developed the SKU based on the SKU definition requirements below and tested it on a Mellanox SN4600C switch.

Port configuration (an illustrative port_config.ini entry follows this list):
• Breakout mode for each port: No
• Speed of the port: 100G
• Auto-negotiation enable/disable: No setting required
• FEC mode: No setting required
• Type of transceiver used: Not needed
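
For illustration, a non-breakout 100G port entry in the SKU's port_config.ini looks roughly like the sketch below. The lane numbers, aliases and indexes here are placeholders; the real mapping comes from the SN4600C platform files in this PR.

  # name         lanes        alias    index    speed
  Ethernet0      0,1,2,3      etp1     1        100000
  Ethernet4      4,5,6,7      etp2     2        100000
  ...

(64 such entries in total, 4 lanes per port, no breakout.)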

Buffer configuration (a buffer-defaults sketch follows this list):
• Shared headroom: enabled
• If shared headroom is enabled, what is the over-subscription ratio? Same as SN3800
• Dynamic buffer: disabled
• In the static buffer scenario, how many uplinks and downlinks? Same as SN3800
• 2 km cable support required? No
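
As a point of reference, the static-buffer defaults under review (buffers_defaults_t*.j2) start with pool and shared-headroom sizes along the lines below. The values are the ones visible in the diff further down, with the xoff define named per Stephen's suggestion; the merged template is authoritative.

  {% set default_cable = '5m' %}
  {% set ingress_lossless_pool_size = '53379072' %}
  {% set ingress_lossless_xoff_size = '1540096' %}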

Switch configuration (an illustrative sai.profile fragment follows this list):
• Warmboot enabled? Yes
• Should warmboot be added to the SAI profile when enabled? Yes
• Is the VxLAN source port range set? Yes
• Should the VxLAN source port range be added to the SAI profile when set? Same as SN3800
• Is Static Policy Based Hashing enabled? No
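
For orientation only, the SAI profile additions discussed above would be single key=value lines of the form sketched below. Both key names and the warmboot path are assumptions on my part and should be verified against the SN3800 SKU's sai.profile before copying.

  SAI_VXLAN_SRCPORT_RANGE_ENABLE=1
  SAI_WARM_BOOT_WRITE_FILE=/var/warmboot/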

Number of Uplinks/Downlinks:

  • t0: 32 100G down links and 32 100G up links.
  • t1: 56 100G down links and 8 100G up links.

How to verify it

Set the SKU in config_db.json to Mellanox-SN4600C-C64 and verify that the 100G ports come up on the switch.
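
Concretely, the verification amounts to pointing DEVICE_METADATA at the new SKU and reloading. The fragment and commands below are a sketch of the standard SONiC flow, not part of this PR:

  "DEVICE_METADATA": {
      "localhost": {
          "hwsku": "Mellanox-SN4600C-C64"
      }
  }

After sudo config reload -y, show interfaces status should list all 64 ports at 100G and oper up.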

Which release branch to backport (provide reason below if selected)

  • 201811
  • 201911
  • 202006
  • 202012

Description for the changelog

Changes are in sonic-buildimage/device/mellanox/x86_64-mlnx_msn4600c-r0/Mellanox-SN4600C-C64/ folder.
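
For context, a Mellanox SKU folder of this kind typically contains something like the listing below; the authoritative file list is the commit itself, so treat this as a sketch:

  Mellanox-SN4600C-C64/
  ├── buffers.json.j2
  ├── buffers_defaults_t0.j2
  ├── buffers_defaults_t1.j2
  ├── hwsku.json
  ├── pg_profile_lookup.ini
  ├── port_config.ini
  ├── qos.json.j2
  └── sai.profile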

A picture of a cute animal (not mandatory but encouraged)

@liat-grozovik left a comment

Please provide more details of the SKU definition in the description: buffer model (dynamic/static), shared headroom (yes/no). Based on that, please ask Stephen to review and approve.

@liat-grozovik left a comment

Note: please check that it can be cleanly cherry-picked to 202012 and, if so, add a comment to add it there as well.

@stephenxs

Suggest adding the SKU port info like this:

  - For SN4600C platform:
    - C64:
      - t0: 32 100G down links and 32 100G up links.
      - t1: 56 100G down links and 8 100G up links.

You can refer to PR 7337 regarding the description.

@stephenxs

The buffer pool sizes and SHP sizes are correct.

@madhanmellanox
Owner Author

Note: please check that it can be cleanly cherry-picked to 202012 and, if so, add a comment to add it there as well.

In the master branch an hwsku.json file exists in the SKU folder, but in the 202012 branch the file does not exist, so it cannot be cherry-picked. A new PR for the 202012 branch has to be created. I will do that once the master branch merge happens.
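
For reference, the hwsku.json being discussed follows the standard SONiC shape below; the breakout-mode string is illustrative and the file in the SKU folder is authoritative:

  {
      "interfaces": {
          "Ethernet0": {
              "default_brkout_mode": "1x100G[50G,40G,25G,10G,1G]"
          },
          "Ethernet4": {
              "default_brkout_mode": "1x100G[50G,40G,25G,10G,1G]"
          }
      }
  }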

@madhanmellanox
Owner Author

Suggest adding the SKU port info like this:

  - For SN4600C platform:
    - C64:
      - t0: 32 100G down links and 32 100G up links.
      - t1: 56 100G down links and 8 100G up links.

You can refer to PR 7337 regarding the description.

Addressed it.

@madhanmellanox
Owner Author

Please provide more details of the SKU definition in the description: buffer model (dynamic/static), shared headroom (yes/no). Based on that, please ask Stephen to review and approve.

Addressed it.

@madhanmellanox
Owner Author

@liat-grozovik can I take it as approved internally and raise a PR against Azure master? Please let me know.

@stephenxs left a comment

Approve the buffer configuration.

@stephenxs left a comment

Originally, I focused on the numbers when I was reviewing this PR and didn't realize there are errors in the macro names.
@madhanmellanox please adjust it according to my suggestions and open a new PR.
Sorry for the inconvenience.

@@ -0,0 +1,112 @@
{% set default_cable = '5m' %}
{% set ingress_lossless_pool_size = '53379072' %}
{% set ingress_lossy_pool_size = '1540096' %}


Suggested change
- {% set ingress_lossy_pool_size = '1540096' %}
+ {% set ingress_lossless_xoff_size = '1540096' %}

"BUFFER_POOL": {
"ingress_lossless_pool": {
{%- if dynamic_mode is not defined %}
"size": "{{ ingress_lossless_pool_size }}",


Suggested change
"size": "{{ ingress_lossless_pool_size }}",
"size": "{{ ingress_lossless_pool_size }}",
"xoff": "{{ ingress_lossless_xoff_size }}",

"BUFFER_POOL": {
"ingress_lossless_pool": {
{%- if dynamic_mode is not defined %}
"size": "{{ ingress_lossless_pool_size }}",


Suggested change
"size": "{{ ingress_lossless_pool_size }}",
"size": "{{ ingress_lossless_pool_size }}",
"xoff": "{{ ingress_lossless_xoff_size }}",

"type": "ingress",
"mode": "dynamic"
},
"ingress_lossy_pool": {


The ingress_lossy_pool section should be removed.

"dynamic_th":"7"
},
"ingress_lossy_profile": {
"pool":"[BUFFER_POOL|ingress_lossy_pool]",


Suggested change
"pool":"[BUFFER_POOL|ingress_lossy_pool]",
"pool":"[BUFFER_POOL|ingress_lossless_pool]",

"BUFFER_PORT_INGRESS_PROFILE_LIST": {
{% for port in port_names.split(',') %}
"{{ port }}": {
"profile_list" : "[BUFFER_PROFILE|ingress_lossless_profile],[BUFFER_PROFILE|ingress_lossy_profile]"


Suggested change
"profile_list" : "[BUFFER_PROFILE|ingress_lossless_profile],[BUFFER_PROFILE|ingress_lossy_profile]"
"profile_list" : "[BUFFER_PROFILE|ingress_lossless_profile]"

"BUFFER_POOL": {
"ingress_lossless_pool": {
{%- if dynamic_mode is not defined %}
"size": "{{ ingress_lossless_pool_size }}",


Suggested change
"size": "{{ ingress_lossless_pool_size }}",
"size": "{{ ingress_lossless_pool_size }}",
"xoff": "{{ ingress_lossless_pool_xoff }}",

"type": "ingress",
"mode": "dynamic"
},
"ingress_lossy_pool": {


The ingress_lossy_pool section should be removed.

"BUFFER_PORT_INGRESS_PROFILE_LIST": {
{% for port in port_names.split(',') %}
"{{ port }}": {
"profile_list" : "[BUFFER_PROFILE|ingress_lossless_profile],[BUFFER_PROFILE|ingress_lossy_profile]"


Suggested change
"profile_list" : "[BUFFER_PROFILE|ingress_lossless_profile],[BUFFER_PROFILE|ingress_lossy_profile]"
"profile_list" : "[BUFFER_PROFILE|ingress_lossless_profile]"
