Conversation


@denera denera commented Jan 27, 2026

Description

This PR integrates the Dao AI Lab's fused softmax-topK implementation from SonicMoE into the TE/PyTorch token router.
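For context, a minimal unfused reference for softmax-topK token routing is sketched below. This is illustrative pseudocode for the operation being fused, not TE's or SonicMoE's actual API; the function name and inputs are placeholders.

```python
import math

def softmax_topk(logits, k):
    # Reference (unfused) softmax-topK routing over one token's expert logits.
    # Step 1: numerically stable softmax over the expert dimension.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Step 2: select the k highest-probability experts and their routing weights.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    return [probs[i] for i in ranked], ranked

weights, experts = softmax_topk([2.0, 0.5, 1.0, -1.0], k=2)
print(experts)  # [0, 2]
```

The fused kernel avoids materializing the full probability tensor between the two steps, which is the memory-traffic saving SonicMoE targets.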

Install TE and run it with NVTE_USE_SONIC_MOE=1 set in the environment to switch the token router to the SonicMoE softmax-topK implementation.
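The environment variable can be set per invocation or exported for the session; a usage sketch (the training script name is a placeholder, not part of this PR):

```shell
# Enable the SonicMoE fused softmax-topK routing path for the session.
export NVTE_USE_SONIC_MOE=1

# Or scope it to a single run (placeholder entry point):
#   NVTE_USE_SONIC_MOE=1 python train_moe.py

echo "NVTE_USE_SONIC_MOE=$NVTE_USE_SONIC_MOE"
```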

Type of change

  • Documentation change (change only to the documentation, either a fix or new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

Signed-off-by: Alp Dener <adener@nvidia.com>
@denera denera requested a review from ptrendx January 27, 2026 22:12
@denera denera self-assigned this Jan 27, 2026
@denera denera force-pushed the pytorch/sonic-moe-integration branch from 159b8a9 to 1987e03 Compare January 27, 2026 23:00
…oE TopK integration

Signed-off-by: Alp Dener <adener@nvidia.com>
@denera denera force-pushed the pytorch/sonic-moe-integration branch from 1a89651 to e0e0f6f Compare January 28, 2026 21:57
