
Adding attention strategy #1470

Closed
wants to merge 3 commits

Conversation

moderouin

This PR introduces two new strategies based on transformer-style neural networks with attention mechanisms:

  1. Attention: A base strategy with a randomly initialized neural network
  2. EvolvedAttention: A strategy using a pre-trained model optimized through self-play

These strategies represent a modern machine learning approach to the Prisoner's Dilemma, capturing complex patterns in game history through attention mechanisms rather than using hand-crafted rules.

The model processes the last 200 moves of both players, encoding game states (CC, CD, DC, DD) and using self-attention layers to identify patterns and make decisions. The implementation includes a complete neural network architecture with embeddings, attention layers, and classification components.
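The encoding and attention steps described above can be sketched in plain Python. This is a minimal illustration, not the PR's actual implementation: the token mapping, `CONTEXT_LENGTH` constant, and function names are assumptions based on the description (joint moves CC, CD, DC, DD over the last 200 turns, scored with scaled dot-product self-attention).

```python
import math

# Hypothetical token encoding for joint moves, inferred from the PR
# description: each turn's pair of moves becomes one of four states.
STATE_TOKENS = {("C", "C"): 0, ("C", "D"): 1, ("D", "C"): 2, ("D", "D"): 3}
CONTEXT_LENGTH = 200  # the PR states the model sees the last 200 moves

def encode_history(my_moves, opponent_moves):
    """Map the joint move history to integer tokens, keeping only the
    most recent CONTEXT_LENGTH game states."""
    pairs = list(zip(my_moves, opponent_moves))[-CONTEXT_LENGTH:]
    return [STATE_TOKENS[p] for p in pairs]

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(queries, keys, values):
    """Scaled dot-product self-attention over lists of vectors,
    one vector per history position."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # attention distribution over positions
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Example: encode a short history, then attend over toy embeddings.
tokens = encode_history(["C", "D", "C"], ["C", "C", "D"])  # → [0, 2, 1]
embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(embeddings, embeddings, embeddings)
```

In the real strategy these steps would be followed by embedding lookups, multiple attention layers, and a classification head that outputs cooperate or defect; those components are omitted here.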

The pre-trained weights for the EvolvedAttention model are loaded from external data files.

moderouin closed this Feb 28, 2025