Browsing by Subject "Redes de computadores"
Now showing 1 - 10 of 10

Item Análise de dados de redes Wi-Fi por meio de redes de correlação (2021-12-17)
Ferreira, Anderson dos Santos; Goncalves, Glauco Estacio; Medeiros, Victor Wanderley Costa de
http://lattes.cnpq.br/7159595141911505; http://lattes.cnpq.br/6157118581200722
The growing use of Wi-Fi networks has generated a large volume of data that allows studies of human behavior. One way to study these data is through a complex network built from correlation coefficients. Data from the UFRPE Wi-Fi network were collected over six days; the resulting time series allowed the analysis of occupancy charts for the buildings of the main campus and the construction of complex networks based on correlation coefficients. The occupancy charts revealed the times and buildings with the highest occupancy on each day, while the complex-network metrics showed strong correlations between buildings with different degree values.
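
The study above builds a complex network from correlation coefficients between per-building occupancy time series. A minimal sketch of that construction, assuming a pandas DataFrame with one column per building; the building names, sample values and the 0.7 threshold are illustrative, not taken from the UFRPE dataset:

```python
import pandas as pd
import networkx as nx

def correlation_network(occupancy: pd.DataFrame, threshold: float = 0.7) -> nx.Graph:
    """Build a graph whose nodes are buildings and whose edges link
    buildings with strongly correlated occupancy time series."""
    corr = occupancy.corr(method="pearson")  # pairwise Pearson correlation
    g = nx.Graph()
    g.add_nodes_from(corr.columns)
    for a in corr.columns:
        for b in corr.columns:
            if a < b and corr.loc[a, b] >= threshold:
                g.add_edge(a, b, weight=float(corr.loc[a, b]))
    return g

# Illustrative usage: columns are hypothetical building names,
# rows are connected-device counts per time slot.
df = pd.DataFrame({
    "Building_A": [10, 25, 40, 30, 12],
    "Building_B": [8, 22, 38, 28, 10],
    "Building_C": [5, 6, 4, 7, 5],
})
g = correlation_network(df)
print(dict(g.degree()))  # degree of each building in the correlation network
```

Degree and other complex-network metrics can then be read directly from the resulting networkx graph, mirroring the kind of analysis the abstract describes.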

Item Computação conjunta e alocação de recurso para offloading em computação de borda: um mapeamento sistemático (2023-09-15)
Lócio, Daniel Mariz; Domingues, Jeísa Pereira de Oliveira
http://lattes.cnpq.br/1291084760973085; http://lattes.cnpq.br/3920880960316221
This work presents a systematic mapping of articles published between 2016 and 2023 on offloading in edge computing, or Multi-access Edge Computing (MEC), considering the aspects of joint computation and resource allocation. MEC is a technology that aims to reduce latency and increase efficiency by processing data close to its source, an approach necessary for the future of computer networks. Based on the proposed mapping, this work discusses the techniques, methods and models found in the reviewed articles, the main challenges faced in the area, and the approaches proposed to address them. The result of the mapping is a classification of the field, providing a comprehensive and detailed overview of recent advances in edge computing, with special attention to joint computation and resource allocation solutions that enhance the performance and efficiency of the offloading technique.

Item Desenvolvimento de um plug-in para a replicação de dados entre os sistemas NetBox e ServiceNow CMDB (2023-05-03)
Silva Júnior, Manassés Júlio da; Gouveia, Roberta Macêdo Marques
http://lattes.cnpq.br/2024317361355224
This work presents a new plugin developed to integrate the open-source network configuration software NetBox with the ServiceNow CMDB. The plugin extends the functionality of NetBox, allowing users to send NetBox data to the ServiceNow API. NetBox is an open-source web application that helps manage and document computer networks, acting as a centralized repository for network infrastructure information, including device inventory, IP address management, cable management and power management. The ServiceNow CMDB, in turn, is a central repository that holds information about all assets and configuration items in an organization's IT infrastructure. The integration between the two platforms is achieved through plugins that extend NetBox so that it can work together with the ServiceNow CMDB. The project uses Python as the main language, the Django web framework, and Docker to create the development environment. Overall, it provides a powerful and flexible tool with which network administrators and operators can manage their network infrastructure. The plugin follows Django's MTV (Model-View-Template) architecture, where the Model represents the data and database schema, the View handles requests and responses, and the Template generates the HTML output. Its main functionality is the automatic replication of Create, Read, Update and Delete (CRUD) modifications on selected NetBox objects to the ServiceNow CMDB, performed through the ServiceNow API. This automatic replication uses webhooks to monitor object modifications, and the plugin handles the creation and deletion of these webhooks automatically. Other features include manual batch replication and a simulation mode for sending data to the CMDB. The plugin's visual interface is simple and focused on its functionalities.
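
The replication flow described above reacts to NetBox object changes and writes them to the ServiceNow CMDB through its API. A minimal sketch of the outbound half of that idea, assuming the webhook payload has already been parsed into an event name and a record dict; the instance URL, credentials, field names and the cmdb_ci target table are placeholders, not taken from the plugin:

```python
import requests

SERVICENOW_INSTANCE = "https://example.service-now.com"  # hypothetical instance URL
SN_AUTH = ("api_user", "api_password")                   # placeholder credentials

def replicate_to_servicenow(event: str, record: dict, table: str = "cmdb_ci") -> None:
    """Push a single NetBox object change to the ServiceNow Table API.

    `event` is assumed to be one of "created", "updated" or "deleted", and
    `record` a dict of the object's fields, including a stored ServiceNow
    sys_id for updates and deletes (the field name below is illustrative).
    """
    base = f"{SERVICENOW_INSTANCE}/api/now/table/{table}"
    payload = {"name": record.get("name"), "serial_number": record.get("serial")}
    if event == "created":
        resp = requests.post(base, json=payload, auth=SN_AUTH, timeout=10)
    elif event == "updated":
        resp = requests.patch(f"{base}/{record['servicenow_sys_id']}", json=payload,
                              auth=SN_AUTH, timeout=10)
    elif event == "deleted":
        resp = requests.delete(f"{base}/{record['servicenow_sys_id']}",
                               auth=SN_AUTH, timeout=10)
    else:
        raise ValueError(f"unexpected event type: {event}")
    resp.raise_for_status()

# Illustrative call, as a webhook handler might make after a device is created in
# NetBox (requires a real ServiceNow instance, so it is left commented out):
# replicate_to_servicenow("created", {"name": "core-sw-01", "serial": "ABC123"})
```

In the plugin itself this logic sits behind Django views and NetBox-managed webhooks; the sketch only shows the outbound Table API calls.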

Item Detecção e modelagem de ameaças persistentes avançadas na fase de movimentação lateral: uma abordagem com process mining (2025-03-20)
Silva, Jonathas Felipe da; Lins, Fernando Antonio Aires; Lima, Milton Vinicius Morais de
http://lattes.cnpq.br/3409150377712315; http://lattes.cnpq.br/2475965771605110; http://lattes.cnpq.br/1017193816402551
The growing threat of complex cyberattacks has demanded advanced defense strategies, especially for the early detection of suspicious activity in compromised networks. Advanced Persistent Threats (APTs) represent a significant cybersecurity challenge, characterized by sophisticated and targeted attacks. This work investigates lateral movement within compromised networks, using process mining to detect suspicious behavior patterns. To that end, an experimental environment with virtual machines simulating an APT attack was set up; system and Wazuh logs recorded the activities, making it possible to extract the events relevant to the study. The methodology consists of collecting data in two scenarios, normal use and attack, followed by the application of process mining algorithms such as the Alpha Miner through the pm4py library. This made it possible to identify structural differences between normal processes and those manipulated by the attacker, enabling the creation of indicators of compromise (IoCs). The results contribute to improving APT detection and response mechanisms, helping protect corporate networks against advanced attacks.
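
The approach above discovers process models from the event logs of the two scenarios with the Alpha Miner via pm4py. A minimal sketch of that discovery step, assuming the system and Wazuh events have already been flattened into a table; the column names, hosts and activities are illustrative, not taken from the study's logs:

```python
import pandas as pd
import pm4py

# Hypothetical event log: one row per audited action, with case, activity and timestamp.
events = pd.DataFrame({
    "case_id": ["host1", "host1", "host1", "host2", "host2"],
    "activity": ["logon", "smb_session", "remote_exec", "logon", "remote_exec"],
    "timestamp": pd.to_datetime([
        "2025-01-01 10:00", "2025-01-01 10:05", "2025-01-01 10:07",
        "2025-01-01 11:00", "2025-01-01 11:02",
    ]),
})

# Map the columns to the identifiers pm4py expects (case, activity, timestamp).
log = pm4py.format_dataframe(
    events, case_id="case_id", activity_key="activity", timestamp_key="timestamp"
)

# Discover a Petri net with the Alpha Miner; comparing the model mined from the
# "normal use" log with the one mined from the "attack" log is what the study
# turns into indicators of compromise.
net, initial_marking, final_marking = pm4py.discover_petri_net_alpha(log)
print(net)
```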

Item DmzVisor: uma abordagem para segurança de zonas desmilitarizadas corporativas em redes definidas por software (2018)
Pereira, Valson da Silva; Sena, Ygor Amaral Barbosa Leite de
http://lattes.cnpq.br/2441367990383979; http://lattes.cnpq.br/1271684226502864
The advance in the use of the worldwide computer network in recent decades has made the internet one of the main communication tools around the world. However, the more companies provide services via the web, the more they tend to expose important information on the network in some way, so there is a need for concern about the security of these services. One of the recommendations for the data security of organizations that provide external services is to use a demilitarized zone (DMZ), whose configuration relies on one or more firewalls but can represent a considerable financial cost for the company. In addition, many firewall appliances are delivered as black-box proprietary solutions with software embedded by the manufacturer, leaving little flexibility from a customization point of view. The Software-Defined Networking (SDN) paradigm, together with the OpenFlow protocol, allows flexibility in developing software solutions for networks and therefore offers the possibility of a low-cost, customizable software product. With SDN, the application can be implemented in a high-level programming language using free, open-source tools, which also makes it possible to maintain the software to suit the client's needs. The overall goal of this work was thus to develop a corporate SDN firewall as a low-cost alternative, built with open-source tools, that acts as a packet filter and isolates traffic between the local network and the demilitarized zone through SDN rules and the secure handling of protocol messages such as Dynamic Host Configuration Protocol (DHCP) and Address Resolution Protocol (ARP). In addition to the packet-filtering mechanisms for the network and transport layers and the extra security provided by network isolation, a friendly web graphical user interface was developed in which the administrator can create high-level firewall rules without needing to know the OpenFlow protocol. The evaluation of the proposal comprised two scenarios using six machines virtualized with VirtualBox. It demonstrated that both the ARP and DHCP security rules and the firewall rules are effective in the networks protected by the proposed prototype, preventing man-in-the-middle attacks and IP and MAC address spoofing, as well as filtering and routing network- and transport-layer packets securely.

Item Estratégias de sucesso em sites de financiamento coletivo (2019)
Lyra, Luiz Guilherme Carriço Villachan; Xavier, Leonardo Ferraz
http://lattes.cnpq.br/6143161358329055
The constant and accelerated technological evolution has drastically changed human society, in its knowledge, its means of production and its social relations. The evolution of microcomputing and the advent of the Internet contributed to the birth of the so-called Network Society era, in which information transmission and communication are instantaneous and completely independent of geographical distance. One of the facets of human socialization affected by the Network Society is solidarity, which has taken a new form on crowdfunding sites. The present work therefore focuses on the theme "Success Strategies on Collective Financing Websites". Its aim is to understand the variables involved in successful crowdfunding campaigns. To this end, an exploratory study based on a literature review was conducted, together with a questionnaire on crowdfunding applied through the Survey Monkey research site. The study concludes that it contributes to the crowdfunding theme through its literature review and survey.

Item Projeto integrado de redes ópticas de longa distância e metropolitanas usando algoritmos de inteligência computacional: estudo de caso para o estado de Pernambuco (2017)
Nascimento, Jorge Candeias do; Araújo, Danilo Ricardo Barbosa de
http://lattes.cnpq.br/2708354422178489; http://lattes.cnpq.br/8065833426856653
Nowadays, several network technologies with different prices and characteristics are appearing on the market. A network topology design involves several metrics used to evaluate the project, such as robustness metrics (which reflect the network's ability to recover from a failure), blocking probability and energy consumption. The best way to optimize the infrastructure would be to use only the latest and most efficient technologies, even if they are more expensive; however, one of the metrics to be considered in this type of project is cost (capital employed), so it is not always feasible to use the most expensive technologies on the market. Many technical choices can help control the metrics of these projects, among them the network topology (link interconnection). Multiobjective evolutionary algorithms (algorithms inspired by the evolution of species) have been studied in the state of the art for the design of network topologies, while clustering algorithms (algorithms specialized in separating samples into groups) have been used in other types of network studies. This study applied computational intelligence algorithms to the design of a network topology, using the state of Pernambuco as a case study. In the first stage, a clustering algorithm was used to divide the state into groups. The intention of this stage was to measure the coverage of the network relative to the entire area of the state, thus ensuring the completeness of the network; it also proposed a cost-control model that mixes different network technologies (passive or active) depending on the function of each network segment. In the second stage, a multiobjective evolutionary algorithm was used to compose several network topologies serving the clusters created in the previous stage. This algorithm evolved the topologies in order to improve four metrics: blocking probability, cost, energy consumption and algebraic connectivity. The multiobjective algorithm was then redesigned as a memetic algorithm, and, after a set of executions, its performance was compared with and without the alteration. The results of the first stage showed that the clustering techniques are efficient and adaptable to the proposed goal, both in terms of network completeness and cost control. In the second stage, the multiobjective search, a quality indicator (hypervolume) showed that the memetic version of the algorithm improved convergence and diversity with respect to the Pareto front.
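
The first stage of the optical-network design above divides the state into groups with a clustering algorithm. A minimal sketch of that step using k-means over municipality coordinates; the algorithm choice, the sample coordinates and the value of k are illustrative, not necessarily those used in the study:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (longitude, latitude) coordinates of municipalities to be served.
# Euclidean distance on degrees is a rough but sufficient proxy for a sketch.
coords = np.array([
    [-34.88, -8.05],   # Recife
    [-34.91, -8.01],   # Olinda
    [-35.00, -8.28],   # Cabo de Santo Agostinho
    [-36.50, -8.89],   # Garanhuns
    [-38.30, -8.00],   # Serra Talhada
    [-40.50, -9.39],   # Petrolina
])

# Divide the area into k regional groups; each group would become a metro
# segment attached to the long-distance backbone.
k = 3
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(coords)
for label, point in zip(model.labels_, coords):
    print(f"cluster {label}: {point}")
print("cluster centers (candidate aggregation nodes):", model.cluster_centers_)
```

Grouping the municipalities this way gives both a coverage check (every city belongs to some cluster) and a natural place to decide which technology, passive or active, serves each segment.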

Item Proposta de um meta-modelo para avaliação de robustez de redes de computadores com base na combinação de métricas topológicas (2017)
Barros, Gustavo Henrique Pinto Soares de; Araújo, Danilo Ricardo Barbosa de
http://lattes.cnpq.br/2708354422178489; http://lattes.cnpq.br/1155438495823549
A growing demand for resilience and robustness in the field of computer networks arises from the great diversity of its applications. Modern systems are increasingly critical in nature, and the occurrence of disturbances may cause significant human, monetary or environmental losses. Optical fiber acts in current systems as the main transmission medium; among the applications heavily dependent on its infrastructure are the internet, cable television and high-transmission-rate systems. The non-homogeneous and complex topological nature of these networks makes their evaluation increasingly costly. For these reasons, optical networks are the object of study of this research. Quantifying the robustness of networks is usually accomplished by simulating node and link failures, whose monetary and time cost scales with the network size. This research analyzes the possibility of obtaining, through an alternative regression method, values of robustness metrics of complex networks that would originally be obtained from simulations. The method takes as inputs the values of simple metrics that can be computed without simulation and uses artificial neural networks to forecast the simulation results in a shorter time. The results are obtained by comparing the output of the proposed model with the output of node and link failure simulations. They indicate that the proposed model presents a satisfactory error margin, between 10⁻³ and 10⁻⁹; thus the simulation values were reproduced successfully through regression in a shorter time.
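
The meta-model above regresses expensive robustness values onto cheap topological metrics with an artificial neural network. A minimal sketch of that pipeline; the feature set, the stand-in "simulated" robustness function and the random training topologies are illustrative, not the study's actual metrics or simulator:

```python
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def simple_metrics(g: nx.Graph) -> list:
    """Cheap topological features used as regression inputs."""
    degrees = [d for _, d in g.degree()]
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        float(np.mean(degrees)),
        nx.density(g),
        nx.average_clustering(g),
    ]

def simulated_robustness(g: nx.Graph) -> float:
    """Stand-in for the expensive failure simulation: fraction of node pairs
    still connected after removing the highest-degree node."""
    victim = max(g.degree(), key=lambda x: x[1])[0]
    h = g.copy()
    h.remove_node(victim)
    n = h.number_of_nodes()
    reachable = sum(len(c) * (len(c) - 1) for c in nx.connected_components(h))
    return reachable / (n * (n - 1))

# Train on a batch of random topologies, then predict for an unseen one.
graphs = [nx.gnm_random_graph(20, 40, seed=s) for s in range(200)]
X = np.array([simple_metrics(g) for g in graphs])
y = np.array([simulated_robustness(g) for g in graphs])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)

new_graph = nx.gnm_random_graph(20, 40, seed=999)
print(model.predict([simple_metrics(new_graph)]))  # predicted robustness, no simulation needed
```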

Item Uso de técnicas de detecção de comunidades para análise de redes ópticas (2021-12-09)
Barros, Jonas Freire de Alcântara Marques de; Araújo, Danilo Ricardo Barbosa de
http://lattes.cnpq.br/2708354422178489; http://lattes.cnpq.br/6917406943428049
The growth in the use of services on the Internet has created an increasing demand for high transmission rates, a demand that has been met by optical networks. At the design stage of these networks, the engineer must be able to assess the performance of a given network before its actual physical implementation. In this design process, several topologies are considered, and the comparison between topologies is made through metrics that each indicate a certain aspect of the network. Typically the metrics considered are performance indicators, such as throughput, blocking probability and resilience, as well as other indicators, such as the network cost. Performance indicators are important because they inform about the quality of a particular topology and are therefore essential in the design of such networks. The most reliable way to calculate these performance indicators is through simulation. However, simulations have a high computational cost, increasing the time needed to obtain information about topologies, and in these projects a very large number of different topologies must be considered. On the other hand, a large body of research in the most diverse domains of knowledge has been carried out on community detection in graphs, yet there are no applications of these techniques to high-capacity fiber-optic networks. Thus, the present work investigates the existence of a correlation between the ability of a fiber-optic network to form communities and its performance indicators, more specifically its blocking probability and resilience indicators. The analysis compared the blocking probability and resilience of these networks with the clustering metrics using scatter plots. According to the results, there is a positive correlation between the community metrics and the network performance indicators, and the community metrics were obtained approximately 4,500 times faster than the corresponding simulations.
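
The study above compares community-formation metrics against simulated performance indicators. A minimal sketch of that comparison using modularity from a greedy community-detection heuristic; the random topologies and the placeholder blocking-probability values stand in for the simulated data used in the work:

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities, modularity
from scipy.stats import pearsonr

def community_strength(g: nx.Graph) -> float:
    """Modularity of the partition found by a greedy community-detection
    heuristic: a cheap indicator of how strongly the topology forms communities."""
    communities = greedy_modularity_communities(g)
    return modularity(g, communities)

# Hypothetical data: one candidate topology per sample. In the study, the second
# value (blocking probability) would come from simulation; here it is a noisy
# placeholder so the example runs on its own.
rng = np.random.default_rng(0)
topologies = [nx.connected_watts_strogatz_graph(14, 4, 0.3, seed=s) for s in range(30)]
mods = [community_strength(g) for g in topologies]
blocking = [0.3 - 0.2 * m + rng.normal(0, 0.01) for m in mods]

r, p = pearsonr(mods, blocking)
print(f"Pearson correlation between modularity and blocking probability: {r:.2f} (p={p:.3f})")
```

Computing modularity for each candidate topology is far cheaper than simulating it, which is the source of the speedup reported in the abstract.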

Item Utilização de pentest na prevenção de ataques cibernéticos às organizações (2018)
Vieira, Yago Dyogennes Bezerra; D'Emery, Richarlyson Alves
http://lattes.cnpq.br/3553920177544450
With the evolution of technology, new devices are created and more users connect to, and come to depend on, the Internet. Black hats have realized that information and data are valuable to users and businesses and use that knowledge for illicit purposes: stealing data, leaving companies inoperable after attacks, and obtaining profit or competitive advantage. Knowing that no system is totally safe, criminals look for flaws and constantly innovate in their attacks, while many medium and small companies only worry about security after they suffer some kind of damage from an information-security breach. Even when companies invest in security, it must be applied correctly, since a single exploited vulnerability can compromise the entire corporate environment. Information security is the area of computing that aims to protect systems and devices against potential threats, following international standards and the preventive practices recommended by experts in the field. Still unknown to many companies, a pentest lets them measure their level of protection by testing the whole environment, simulating a real attack by a criminal and assessing the risks and consequences of such attacks. A pentest is carefully agreed between the contracting company and the contracted team so that no service stops while the tests are performed, and it can follow a sequence based on established methodologies, depending on the customer's needs. Given this scenario, this monograph discusses and proposes the use of penetration testing in the prevention of cyberattacks against organizations. The work showed that it was possible to carry out security tests in a company's computing environments that, had the flaws been discovered by a black hat, would have led to the leakage, alteration and destruction of information belonging to both the company and its customers. Real flaws were exploited in the computing environment of a company that did not have a culture of protecting its information. The main objective was to demonstrate a method of security-flaw analysis (pentest) and some intrusion techniques used by black hats which, if adopted by security teams, will help prevent attacks of this type; organizations must cultivate a culture of data protection, because even with all necessary security measures no system is totally safe. As results, flaws were found that could be exploited and cause damage such as rendering company systems unusable, destruction, alteration and theft of data, and unauthorized disclosure of personal data, any of which would result in incalculable losses. After the tests, the company proved willing to invest in security and fix the flaws.