
Use-case article: Representation Learning on Graph Structured Data #25

Merged · 24 commits merged into superlinked:main on Jan 2, 2024

Conversation

ricsi98
Contributor

@ricsi98 ricsi98 commented Dec 18, 2023

This article covers two popular algorithms for node representation learning.

Outline:

  • Introduction to node representation learning + introducing demo dataset
  • Introduction to Node2Vec + code example on demo dataset
  • Introduction to GraphSAGE + code example on demo dataset
  • Conclusion: interpreting results and a final comparison of the two algorithms

@morkapronczay morkapronczay added the stage: content review PR under review of the high level content direction label Dec 19, 2023
@morkapronczay morkapronczay self-assigned this Dec 19, 2023
Contributor

@morkapronczay morkapronczay left a comment

Thank you very much, great work all in all, some minor suggestions

Contributor

Please put the SEO here and remove POST2 from the article title; these are published automatically.

The random walks are sampled according to a policy, which is guided by 2 parameters: return $p$, and in-out $q$.

- The return parameter $p$ controls the likelihood of immediately revisiting the previous node. A higher $p$ makes returning less likely, pushing the walk away from already-visited nodes, while a lower $p$ keeps the walk more locally focused.
- The in-out parameter $q$ controls whether the walk stays within the previous node's neighborhood or moves outward. A higher $q$ biases the walk toward Breadth First Search-like local exploration, while a lower $q$ promotes Depth First Search-like outward exploration.
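To make the roles of $p$ and $q$ concrete, here is a minimal sketch of the second-order walk sampling they control; the adjacency-dict graph representation and the function names are illustrative, not the article's actual code.

```python
import random

def transition_weight(prev, nxt, graph, p, q):
    # Unnormalized Node2Vec weight for stepping to `nxt`,
    # given the walk arrived at the current node from `prev`.
    if nxt == prev:              # distance 0: return to the previous node
        return 1.0 / p
    elif nxt in graph[prev]:     # distance 1: stays in prev's neighborhood
        return 1.0
    else:                        # distance 2: moves outward
        return 1.0 / q

def biased_walk(graph, start, length, p, q):
    # `graph` maps each node to a set of its neighbors.
    walk = [start]
    prev = None
    while len(walk) < length:
        curr = walk[-1]
        neighbors = list(graph[curr])
        if not neighbors:
            break
        if prev is None:
            nxt = random.choice(neighbors)
        else:
            weights = [transition_weight(prev, n, graph, p, q) for n in neighbors]
            nxt = random.choices(neighbors, weights=weights, k=1)[0]
        walk.append(nxt)
        prev = curr
    return walk
```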
Contributor

I think if these are mentioned, they should be explained more, e.g. with one more sentence each.


Different types of information, like words, pictures, and connections between things, show us different sides of the world. Relationships, especially, are interesting because they show how things interact and create networks. In this post, we'll talk about how we can use these relationships to understand and describe things in a network better.

We're diving into a real-life example to explain how entities can be turned into vectors using their connections, a common practice in machine learning. The dataset we're going to work with is a subset of the Cora citation network. It comprises 2708 scientific papers (nodes), and the connections indicate citations between them. Each paper has a BoW (Bag-of-Words) descriptor over a vocabulary of 1433 words. The challenge at hand involves predicting the specific scientific category to which each paper belongs, selecting from a pool of seven distinct categories.
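For reference, a minimal sketch of loading this dataset with PyTorch Geometric (the thread below mentions the torch_geometric dataset); the `root` path is an arbitrary choice.

```python
from torch_geometric.datasets import Planetoid

dataset = Planetoid(root="data/Planetoid", name="Cora")
data = dataset[0]

print(data.num_nodes)       # 2708 papers (nodes)
print(data.num_features)    # 1433-dimensional BoW descriptors
print(dataset.num_classes)  # 7 scientific categories
print(data.num_edges)       # citation links
```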
Contributor

I'd propose a descriptive statistic about the dataset. Let's calculate cosine similarity between all items and create a chart that shows:

  • bins of cosine similarity ranges in terms of BoW representations (1-0.98, 0.98-0.96, etc.)
  • against the probability (or just the counts) of pairs having or not having a citation between them, on a bidirectional bar chart (a rough sketch of this computation follows below)
    This would show how connected the 2 aspects are, and how much information there is in incorporating both aspects into our vectors.
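A rough sketch of the proposed statistic, assuming a BoW feature matrix `bow` and a citation `edge_index` already converted to a (2, E) NumPy array; the binning choices are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def binned_connection_counts(bow, edge_index, n_bins=25):
    # Pairwise BoW cosine similarities for all papers.
    sims = cosine_similarity(bow)
    n = sims.shape[0]
    # Symmetric "has a citation" indicator between node pairs.
    connected = np.zeros((n, n), dtype=bool)
    connected[edge_index[0], edge_index[1]] = True
    connected |= connected.T
    # Take each unordered pair once (upper triangle).
    iu = np.triu_indices(n, k=1)
    sim_vals, conn_vals = sims[iu], connected[iu]
    # Count connected vs. unconnected pairs per similarity bin.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(sim_vals, bins) - 1, 0, n_bins - 1)
    conn_counts = np.bincount(idx[conn_vals], minlength=n_bins)
    unconn_counts = np.bincount(idx[~conn_vals], minlength=n_bins)
    return bins, conn_counts, unconn_counts
```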

Contributor Author

[Figure bow_cos: distribution of pairwise BoW cosine similarities]

Contributor Author

This is the distribution of the pairwise cosine similarities.

Contributor Author

In the second bullet point, do you want to show how well the cosine similarities reflect connections in the graph? I don't exactly get how the plot should look.
Additionally, I could visualize the ROC curve of predicting whether nodes are connected based on BoW feature cosine similarity; that would tell us something like …
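If the ROC idea is useful, a hedged sketch, reusing `sim_vals` and `conn_vals` from the binning sketch above:

```python
from sklearn.metrics import roc_auc_score

# Treat BoW cosine similarity as a score for predicting a citation link.
auc = roc_auc_score(conn_vals, sim_vals)
print(f"ROC AUC of BoW similarity as a link predictor: {auc:.3f}")
```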

Contributor Author

Added this part in the latest commit. For me it feels a bit odd; we should tell the reader why we need this statistic. Do you have any idea how to blend it into the "story line" more?


The results are slightly worse (by 3%) than the results we got by combining Node2Vec with BoW features; however, remember that with this model we can embed completely new nodes too. If our scenario requires inductiveness, GraphSAGE might be a better solution; however, in a transductive setting, Node2Vec would give us better results.
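For context, a minimal sketch of a two-layer GraphSAGE encoder in PyTorch Geometric; the layer sizes are illustrative assumptions, not the article's actual hyperparameters.

```python
import torch
from torch_geometric.nn import SAGEConv

class GraphSAGE(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)

# Because the model is a function of node features, it can embed
# unseen nodes at inference time (the inductive setting).
model = GraphSAGE(in_dim=1433, hidden_dim=256, out_dim=128)
```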

## Conclusion
Contributor

Don't you think it would be worth embedding the text of the papers with some sentence-transformer model as well, and repeating the scenarios where it is concatenated to Node2Vec?
Does GraphSAGE work on the vectors, or does it embed the text itself? Because it could be worth adding it to that scenario as well. This is a reasonably sized, relatively well-performing model.

Contributor Author

  1. Sure, I will try to do that. Unfortunately, the torch_geometric dataset does not contain the text of the articles. However, I found the original data (from which the torch_geometric dataset should be derived) that contains paper extracts. I will try to match the paper IDs and embed the abstracts with the LLM.
  2. GraphSAGE uses the BoW features as input. We can also try to train the SAGE model with the LLM features.
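A hypothetical sketch of step 1; the model name `all-MiniLM-L6-v2` is an assumed choice, not necessarily the one used in the final article.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
# `abstracts` is a list of paper abstract strings, matched to nodes by paper ID.
abstract_embeddings = model.encode(abstracts, show_progress_bar=True)
```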

Contributor

@morkapronczay morkapronczay left a comment

Loving this! I suggested 2 small typo changes; this can go to Robert!


In this plot, we divided the groups (shown on the y-axis) to have about the same number of pairs in each. The only exception was the 0-0.04 group, where lots of pairs had no similar words - they couldn't be split into smaller groups.

From the plot, it's clear that connected nodes usually have higher cosine similarities. This means papers that cite each other often use similar words. But when we ignore zero similarities, papers that have note cited each other seem to have a wide range of common words.
Contributor

Suggested change:
- From the plot, it's clear that connected nodes usually have higher cosine similarities. This means papers that cite each other often use similar words. But when we ignore zero similarities, papers that have note cited each other seem to have a wide range of common words.
+ From the plot, it's clear that connected nodes usually have higher cosine similarities. This means papers that cite each other often use similar words. But when we ignore zero similarities, papers that have not cited each other seem to have a wide range of common words.

@morkapronczay morkapronczay added stage: style review PR under review for style guide compliance ( https://hub.superlinked.com/contributing ) and removed stage: content review PR under review of the high level content direction labels Dec 21, 2023
ricsi98 and others added 3 commits December 21, 2023 12:03
Co-authored-by: Mór Kapronczay <mor.kapronczay@gmail.com>
Co-authored-by: Mór Kapronczay <mor.kapronczay@gmail.com>
@robertdhayanturner robertdhayanturner merged commit 035426e into superlinked:main Jan 2, 2024
1 check passed