Replies: 14 comments
-
@Sacramento-20 post old articles
-
@Francisco-aragao next
-
Title: Announcing CVSS v4.0
Main idea:
Integration:
-
Title: 2022 CWE Top 25 Most Dangerous Software Weaknesses
Source: Supplemental Details - 2022 CWE Top 25
Main idea: The page provides supplemental details about the 2022 CWE Top 25 list. Vulnerabilities that are both common and severe receive higher scores, indicating they should be addressed with higher priority.
Integration: New metrics for ranking the IPs' vulnerabilities can be created by combining the frequency and severity signals behind this ranking, as in the sketch below.
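A minimal sketch of such a metric, assuming the published Top 25 methodology where a CWE's score is its normalized CVE frequency times its normalized average CVSS severity, times 100; the counts and severities below are invented for illustration:

```python
# Sketch of a CWE Top 25-style score: normalized frequency times
# normalized average severity, times 100.

def normalize(value, lo, hi):
    return (value - lo) / (hi - lo) if hi > lo else 0.0

# Hypothetical (CVE count, average CVSS) per CWE, not real 2022 data.
cwes = {
    "CWE-787": (3500, 8.2),
    "CWE-79":  (4000, 5.8),
    "CWE-89":  (1200, 9.0),
    "CWE-20":  (2000, 7.4),
}

counts = [c for c, _ in cwes.values()]
severities = [s for _, s in cwes.values()]

def score(cwe):
    count, cvss = cwes[cwe]
    freq = normalize(count, min(counts), max(counts))   # how common
    sev = normalize(cvss, min(severities), max(severities))  # how severe
    return freq * sev * 100

for cwe in sorted(cwes, key=score, reverse=True):
    print(cwe, round(score(cwe), 2))
```

Note that the min/max normalization pushes the least frequent and least severe CWEs in the dataset to a score of zero, which is a known quirk of this kind of ranking.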
-
Name:
Year:
Context:
Central Idea Discussed:
Results:
How We Can Apply These Ideas in Our Project:
-
@Sacramento-20 next
-
Title: Prioritizing Vulnerability Response: A Stakeholder-Specific Vulnerability Categorization
Source: https://insights.sei.cmu.edu/documents/583/2019_019_001_636391.pdf
Year: 2019
Main idea: This article compares CVSS (Common Vulnerability Scoring System) with a new approach, SSVC (Stakeholder-Specific Vulnerability Categorization), based on decision trees. The main point is the methodology behind the analysis, to understand whether this structure can be used. CVSS takes technical severity as its fundamental concept, but a CVSS score does not inform the decisions to be taken about a vulnerability. Decisions are not a number; they are qualitative actions an organization can take. Even when we have an idea of how to act on value X, it is unclear how to act on value X+1, for example. Each vulnerability has a unique interaction with the system, and the context in which it is inserted contributes to the analysis of its impact. Decision trees can be used to enumerate the decisions and outcomes, helping managers direct focus and set priorities, with labels that vary from "defer" (do not act at present) to "immediate" (act immediately). Other points to consider are decision points (hypotheses to be tested), for example whether exploits exist, the technical impact of exploiting the vulnerability, the utility of the exploit, and the risks this security failure can cause. In the end, the paper discusses limitations and some tradeoffs in the use of SSVC, such as the removal of the numeric classification, which can be "uncomfortable" for some people.
Integration: This idea can be integrated to build a more refined analysis of vulnerabilities, given that more data is now needed to make the classification. This can lead us to different prioritization paths, based not just on a score but on more specific data about the system and application. At the same time, more data has to be collected and considered. The discussion is important to understand and provoke reflection on how CVSS works and how we can improve the results we obtain. A decision-tree sketch follows below.
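As a rough illustration, here is a minimal decision-tree sketch in Python. The "defer" and "immediate" labels come from the paper; the intermediate labels ("scheduled", "out-of-cycle") and the exact branching on the decision points are assumptions for illustration, not the paper's exact tree:

```python
# A minimal SSVC-style decision tree: map qualitative decision points
# (exploitation status, technical impact, mission impact) to a
# qualitative action instead of a numeric score.

def ssvc_priority(exploitation: str, technical_impact: str,
                  mission_impact: str) -> str:
    if exploitation == "active":
        # Exploitation observed in the wild: act quickly.
        return "immediate" if technical_impact == "total" else "out-of-cycle"
    if exploitation == "poc":
        # Proof of concept exists but no attacks observed yet.
        return "out-of-cycle" if mission_impact == "high" else "scheduled"
    # No known exploit: fix in the normal update cycle, or defer.
    return "scheduled" if mission_impact == "high" else "defer"

print(ssvc_priority("active", "total", "high"))  # immediate
print(ssvc_priority("none", "partial", "low"))   # defer
```

Each branch here is a decision point that an analyst can inspect and justify, which is exactly the property the paper argues a bare CVSS number lacks.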
-
Title: Detecção de intrusões em backbones de redes de computadores através da análise de comportamento com SNMP (Intrusion detection in computer network backbones through behavior analysis with SNMP)
Source: http://repositorio.ufsc.br/xmlui/handle/123456789/82569
Year: 2002
Main idea: The dissertation focuses on identifying intrusions in backbones by analyzing network management variables exposed through the SNMP protocol. The focus is on manageable routing equipment located in the core and subnets of a large backbone. Given the high network traffic through these devices, network management tools are essential; however, detecting a device under attack, or one being used as a bridge for an attack, is a complex task. The main variables used to establish a baseline for the routers include CPU consumption, queue drops, queue buffers, packet transfer rate on a specific interface, and percentage of memory usage. All this information is organized chronologically and stored in a database for future analysis, and a web interface was developed so the operator can consult these data over time.
Integration: This dissertation is useful for deciding which parameters should be considered for better management of equipment in a network. Our research focuses on prioritizing vulnerabilities and recommending possible attack mitigation techniques, so understanding how consolidated network protocols work can be extremely useful, depending on the context of the company being audited. There is a challenge related to a company's capacity to collect and store this type of data. Although this is not necessarily a mandatory analysis method for the project, creating modules that perform this analysis for companies with such a structure can be very useful, including for detecting other vulnerabilities, especially when combining existing methods with what the dissertation proposes. A baseline-monitoring sketch follows below.
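As a rough sketch of the baseline idea, the loop below polls a router's management variables, stores them chronologically, and flags deviations from the historical baseline. The polling function is a placeholder (a real collector would issue SNMP GETs against the device's OIDs, e.g. with a library such as pysnmp), and the three-sigma threshold is an assumption:

```python
import statistics

# Chronological history per management variable, standing in for the
# dissertation's database of baseline measurements.
history: dict[str, list[float]] = {"cpu": [], "mem": [], "queue_drops": []}

def poll_router() -> dict[str, float]:
    """Placeholder for an SNMP GET of CPU, memory, and queue-drop counters."""
    return {"cpu": 23.0, "mem": 61.0, "queue_drops": 2.0}

def check_anomalies(sample: dict[str, float], sigmas: float = 3.0) -> list[str]:
    """Flag variables more than `sigmas` standard deviations off baseline."""
    alerts = []
    for var, value in sample.items():
        series = history[var]
        if len(series) >= 30:  # need enough history for a stable baseline
            mean, stdev = statistics.mean(series), statistics.pstdev(series)
            if stdev and abs(value - mean) > sigmas * stdev:
                alerts.append(f"{var}: {value} (baseline {mean:.1f} +/- {stdev:.1f})")
        series.append(value)  # store chronologically for later analysis
    return alerts

# Bounded loop for the sketch; a real monitor would poll continuously.
for _ in range(3):
    for alert in check_anomalies(poll_router()):
        print("ANOMALY", alert)
```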
-
Title: Web pages classification: An effective approach based on text mining techniques, by Seyed Moein Babapour, Meysam Roostaee
Year: 2017
Source: https://ieeexplore.ieee.org/document/8324994
Main idea: The article compares different machine learning approaches to the problem of web page classification. The initial steps involve preprocessing the text: converting it to lowercase, removing numbers and words of 1-2 characters, and removing stop words (words without individual meaning, like "and", "the", ...). Other processes discussed are lemmatization and stemming, techniques that reduce inflected words to their base forms. After this, different models were evaluated, differing in the algorithm used (Linear Regression, Naive Bayes, ...), in the preprocessing stages applied (using more or fewer techniques), and in the HTML fields used for classification (body, URL, title, ...). In the end, the results are compared using the ROC curve and AUC, standard techniques for evaluating machine learning classifiers, showing that more preprocessing steps improve the true positive rate.
Integration: Through the article it is possible to learn more about machine learning algorithms and, mainly, about the importance of the preprocessing steps. By applying these preprocessing techniques, it is possible to extract the most crucial information and significantly reduce the size of the text. With this, we can develop and test models for web page classification, taking the presented results into account. A preprocessing sketch follows below.
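A minimal sketch of these preprocessing steps, using NLTK; the English stop list and the Porter stemmer are our choices here, not necessarily the paper's exact tooling:

```python
# Lowercase, drop numbers and 1-2 character words, remove stop words,
# then stem the remaining tokens to their base forms.
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)

STOP = set(stopwords.words("english"))
STEM = PorterStemmer()

def preprocess(text: str) -> list[str]:
    text = text.lower()
    text = re.sub(r"\d+", " ", text)            # remove numbers
    tokens = re.findall(r"[a-z]+", text)        # keep alphabetic tokens
    tokens = [t for t in tokens if len(t) > 2]  # drop 1-2 character words
    tokens = [t for t in tokens if t not in STOP]
    return [STEM.stem(t) for t in tokens]       # reduce to base forms

print(preprocess("The 2 servers were running outdated software versions"))
# -> ['server', 'run', 'outdat', 'softwar', 'version']
```

Even on this short sentence the pipeline shrinks the token set considerably, which is the size reduction the article highlights.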
-
Title: Toward Website Classification, by Mohamed Zohir Koufi; Zahia Guessoum; Amor Keziou; Itheri Yahiaoui; Chloé Martineau; Wandrille Domin
Year: 2023
Source: https://ieeexplore.ieee.org/document/10350143
Main idea: The article addresses different techniques for classifying web pages and compares their results. It emphasizes the preprocessing stages and their importance, covering approaches such as keyword analysis, Bag of Words (a representation of documents as word-count vectors), and Term Frequency-Inverse Document Frequency (TF-IDF, a method to measure the importance of a word within a collection of documents). Furthermore, deep learning techniques are discussed for representing words and documents based on their surrounding words. Algorithms such as SVM (support vector machine), Naive Bayes, and BERT (a Transformer-based model) are then used to perform the classification, and the results are compared across analyses of different parts of the texts.
Integration: The article emphasizes the importance of machine learning algorithms for web page classification, both in the preprocessing steps and in the classification itself. With this in mind, these techniques can be studied and implemented in this project, helping with the web page classification task, which will be useful to understand the available datasets better and to create a model for vulnerability prioritization. A TF-IDF classification sketch follows below.
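A minimal sketch combining two of the techniques the article compares, TF-IDF vectors and Naive Bayes, using scikit-learn; the tiny training set and the labels are invented for illustration:

```python
# TfidfVectorizer builds the document vectors; MultinomialNB classifies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

pages = [
    "buy now discount shopping cart checkout",
    "router firmware admin login configuration",
    "order online free shipping sale",
    "device management snmp interface settings",
]
labels = ["e-commerce", "infrastructure", "e-commerce", "infrastructure"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(pages, labels)

print(model.predict(["admin panel firmware update"]))  # -> infrastructure
print(model.predict(["checkout cart free shipping"]))  # -> e-commerce
```

Swapping MultinomialNB for an SVM (or a BERT-based encoder) keeps the same pipeline shape, which makes the comparison the article performs easy to reproduce.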
-
Title: Genre Categorization of Web Pages, by Jebari Chaker and Ounelli Habib
Year: 2007
Source:
Main idea: The document introduces a different kind of web page classification, by genre; for example, it says that to identify a scientific paper, the page needs to have complex language, low subjectivity, and the presence of graphics. It also discusses the features needed for automatic categorization, such as genre-specific words, verb tense, HTML tags, layout, keywords, and other types of content. To consider all this context in a web page, machine learning algorithms such as Naive Bayes, KNN, and decision trees are used in the process.
Integration: The article is useful for the techniques and features it considers in web classification, like preprocessing, focusing on the URL and certain HTML tags, and representing pages as vectors, which allows applying cosine similarity. The article also introduces the concept of web page genres, which can aggregate more features involving context, text size, number of images, and other fields. This technique can be a good tool to improve the web page classification task, helping us understand more about the scanned IPs and assisting the vulnerability prioritization process. A cosine-similarity sketch follows below.
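A minimal sketch of the vector representation plus cosine similarity mentioned above: pages and genre "profiles" are represented as TF-IDF vectors and a page is assigned the closest genre. The genre names and texts are invented for illustration:

```python
# Represent genre profiles and a page in the same vector space,
# then pick the genre with the highest cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

genre_profiles = {
    "scientific": "abstract method results figure experiment analysis",
    "commercial": "price buy offer product customer shipping",
}
page = "the experiment results are shown in figure 3 of the analysis"

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(genre_profiles.values()) + [page])
profiles, page_vec = matrix[:-1], matrix[-1]

scores = cosine_similarity(page_vec, profiles)[0]
best = max(zip(genre_profiles, scores), key=lambda kv: kv[1])
print(best)  # -> ('scientific', <similarity score>)
```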
-
Title: A review of web page classification, by Ayodeji Osanyin, Olufunke Oladipupo, Ibukun Afolabi
Year: 2018
Source:
Main idea: The article introduces the problem of web page classification and its importance in web search engines and web filtering. Given the huge volume of internet content today, this process must be automated with classification algorithms. This work can be done with techniques like Nearest Neighbors and decision trees, but the success of the process depends on the preprocessing applied to the text and on the quality of the training data. The article walks through the steps of classification: extracting the content, preprocessing, building vectors to represent pages, creating the model, and finally evaluating the classifier with recall and precision metrics.
Integration: The article can be applied to the different stages of our web page classification task. To begin with, the HTML fields it highlights, like metadata, keywords and description, URL, and page content, are useful for the classification system. These fields are present in Shodan data, so we can start by focusing on them in our analysis. Furthermore, it is important to understand the preprocessing steps, like lemmatization, stemming, and removing stop words; these steps shrink the space of words that occur in the text and focus the analysis on a smaller set, making it possible to perform better actions on the data. The use of ML algorithms is also very useful, so we can start to think about which models to evaluate first, as in the sketch below.
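A minimal sketch of the pipeline and evaluation steps the review describes, using scikit-learn; the data, the Nearest Neighbors choice, and the train/test split are illustrative:

```python
# Vectorize pages, train a Nearest Neighbors classifier, then report
# the precision and recall metrics the review recommends.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

pages = [
    "login admin password reset account",
    "kernel packet interface route gateway",
    "signin user credentials session token",
    "switch port vlan trunk routing table",
    "forgot password verify email account",
    "bgp ospf neighbor adjacency routing",
]
labels = ["auth", "network", "auth", "network", "auth", "network"]

X_train, X_test, y_train, y_test = train_test_split(
    pages, labels, test_size=2, stratify=labels, random_state=0)

model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```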
-
Let's use this discussion thread to discuss articles about vulnerability prioritization. For each article, let's use the following template: