📚 Using .NET for Apache® Spark™ to Analyze Log Data


💡 News category: Programming
🔗 Source: devblogs.microsoft.com

At Spark + AI Summit in May 2019, we released .NET for Apache Spark. .NET for Apache Spark is aimed at making Apache® Spark™, and thus the exciting world of big data analytics, accessible to .NET developers.

.NET for Spark can be used for processing batches of data, real-time streams, machine learning, and ad-hoc query. In this blog post, we’ll explore how to use .NET for Spark to perform a very popular big data task known as log analysis.

The remainder of this post describes the following topics:

  • What is log analysis?
  • Writing a .NET for Spark log analysis app
  • Running your app
  • .NET for Apache Spark Wrap Up

What is log analysis?

Log analysis, also known as log processing, is the process of analyzing computer-generated records called logs. Logs tell us what’s happening on a tool like a computer or web server, such as what applications are being used or the top websites users visit.

The goal of log analysis is to gain meaningful insights from these logs about activity and performance of our tools or services. .NET for Spark enables us to analyze anywhere from megabytes to petabytes of log data with blazing fast and efficient processing!

In this blog post, we’ll be analyzing a set of Apache log entries that express how users are interacting with content on a web server. You can view a sample of Apache log entries here.
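For reference, a single entry in the Apache access log format looks something like the line below (the values are illustrative, not taken from the actual sample data set):

10.128.2.1 - - [29/Nov/2017:06:58:55 +0000] "GET /login.php HTTP/1.1" 200 1259

Each entry records the client IP, identity and user fields, a timestamp, the request line (method, resource, protocol), the HTTP status code, and the response size in bytes.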

Writing a .NET for Spark log analysis app

Log analysis is an example of batch processing with Spark. Batch processing is the transformation of data at rest, meaning that the source data has already been loaded into data storage. In our case, the input text file is already populated with logs and won’t be receiving new or updated logs as we process it.

When creating a new .NET for Spark application, there are just a few steps we need to follow to start getting those interesting insights from our data:

  1. Create a Spark Session.
  2. Read input data, typically using a DataFrame.
  3. Manipulate and analyze input data, typically using Spark SQL.

Create a Spark Session

In any Spark application, we start off by establishing a new SparkSession, which is the entry point to programming with Spark:

SparkSession spark = SparkSession
    .Builder()
    .AppName("Apache User Log Processing")
    .GetOrCreate();
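For these snippets to compile as a console app, a minimal set of using directives is needed; assuming the Microsoft.Spark NuGet package is referenced, something like the following should cover the code in this post:

using System;
using System.Linq;
using System.Text.RegularExpressions;
using Microsoft.Spark.Sql;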

By calling methods on the spark object created above, we can now access Spark and DataFrame functionality throughout our program – great! But what is a DataFrame? Let’s learn about it in the next step.

Read input data

Now that we have access to Spark functionality, we can read in the log data we’ll be analyzing. We store input data in a DataFrame, which is a distributed collection of data organized into named columns:

DataFrame generalDf = spark.Read().Text("<path to input data set>");

When our input is contained in a .txt file, we use the .Text() method, as shown above. There are other methods for reading data from different sources, such as .Csv() for comma-separated values files.
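As a rough sketch (the file name and header option here are hypothetical), reading a CSV file with a header row might look like:

DataFrame csvDf = spark.Read()
    .Option("header", "true")
    .Csv("path/to/logs.csv");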

Manipulate and analyze input data

With our input logs stored in a DataFrame, we can start analyzing them – now things are getting exciting!

An important first step is data preparation. Data prep involves cleaning up our data in some way. This could include removing incomplete entries to avoid errors in later calculations or removing irrelevant input to improve performance.

In our example, we should first ensure all of our entries are complete logs. We can do this by comparing each log entry to a regular expression (AKA a regex), which is a sequence of characters that defines a pattern.

Let’s define a regex expressing a pattern all valid Apache log entries should follow:

string s_apacheRx = "^(\\S+) (\\S+) (\\S+) \\[([\\w:/]+\\s[+-]\\d{4})\\] \"(\\S+) (\\S+) (\\S+)\" (\\d{3}) (\\d+)";

How do we perform a calculation on each row of a DataFrame, like comparing each log entry to the above regex? The answer is Spark SQL.

Spark SQL

Spark SQL provides many great functions for working with the structured data stored in a DataFrame. One of the most popular features of Spark SQL is UDFs, or user-defined functions. We define the type of input they take and the type of output they produce, and then the actual calculation or filtering they perform.

Let’s define a new UDF GeneralReg to compare each log entry to the s_apacheRx regex. Our UDF requires an Apache log entry, which is a string, and will return true or false depending on whether the log matches the regex:

spark.Udf().Register<string, bool>("GeneralReg", log => Regex.IsMatch(log, s_apacheRx));

So how do we call GeneralReg?

In addition to UDFs, Spark SQL provides the ability to write SQL calls to analyze our data – how convenient! It’s common to write a SQL call to apply a UDF to each row of data.

To call GeneralReg from above, we first register our DataFrame as a temporary view so it can be referenced in SQL, and then use the following SQL call:

generalDf.CreateOrReplaceTempView("Logs");
generalDf = spark.Sql("SELECT logs.value, GeneralReg(logs.value) FROM Logs");

This SQL call tests each row of generalDf to determine if it’s a valid and complete log.

We can use .Filter() to only keep the complete log entries in our data, and then .Show() to display our newly filtered DataFrame:

generalDf = generalDf.Filter(generalDf["GeneralReg(value)"]);
generalDf.Show();
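If we’d rather carry only the raw log text forward, one option (a sketch, not required by the rest of the example) is to select just the value column after filtering:

generalDf = generalDf.Select("value");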

Now that we’ve performed some initial data prep, we can continue filtering and analyzing our data. Let’s find log entries from IP addresses starting with 10 and related to spam in some way:

// Choose valid log entries that start with 10
spark.Udf().Register<string, bool>(
    "IPReg",
    log => Regex.IsMatch(log, "^(?=10)"));

generalDf.CreateOrReplaceTempView("IPLogs");

// Apply UDF to get valid log entries starting with 10
DataFrame ipDf = spark.Sql(
    "SELECT iplogs.value FROM IPLogs WHERE IPReg(iplogs.value)");
ipDf.Show();

// Choose valid log entries that start with 10 and deal with spam
spark.Udf().Register<string, bool>(
    "SpamRegEx",
    log => Regex.IsMatch(log, "\\b(?=spam)\\b"));

ipDf.CreateOrReplaceTempView("SpamLogs");

// Apply UDF to get valid, start with 10, spam entries
DataFrame spamDF = spark.Sql(
    "SELECT spamlogs.value FROM SpamLogs WHERE SpamRegEx(spamlogs.value)");

Finally, let’s count the number of GET requests in our final cleaned dataset. The magic of .NET for Spark is that we can combine it with other popular .NET features to write our apps. We’ll use LINQ to analyze the data in our Spark app one last time:

int numGetRequests = spamDF 
    .Collect() 
    .Where(r => ContainsGet(r.GetAs<string>("value"))) 
    .Count();
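To surface the result, we can simply print the count:

Console.WriteLine($"Number of GET requests: {numGetRequests}");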

In the above code, ContainsGet() checks for GET requests using regex matching:

// Use regex matching to group data 
// Each group matches a column in our log schema 
// i.e. first group = first column = IP
public static bool ContainsGet(string logLine) 
{ 
    Match match = Regex.Match(logLine, s_apacheRx);

    // Determine if valid log entry is a GET request
    if (match.Success)
    {
        Console.WriteLine("Full log entry: '{0}'", match.Groups[0].Value);
    
        // 5th column/group in schema is "method"
        if (match.Groups[5].Value == "GET")
        {
            return true;
        }
    }

    return false;

} 

As a final step in our Spark apps, we call spark.Stop() to shut down the underlying Spark Session and Spark Context.
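In code, that final call is just:

spark.Stop();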

You can view the complete log processing example in our GitHub repo.

Running your app

To run a .NET for Apache Spark app, you need to use the spark-submit command, which will submit your application to run on Apache Spark.

The main parts of spark-submit include:

  • --class, to call the DotnetRunner.
  • --master, to determine if this is a local or cloud Spark submission.
  • Path to the Microsoft.Spark jar file.
  • Any arguments or dependencies for your app, such as the path to your input file or the dll containing UDF definitions.

You’ll also need to download and set up some dependencies before running a .NET for Spark app locally, such as Java and Apache Spark.

A sample Windows command for running your app is as follows:

spark-submit --class org.apache.spark.deploy.dotnet.DotnetRunner --master local /path/to/microsoft-spark-<version>.jar dotnet /path/to/netcoreapp<version>/LoggingApp.dll

.NET for Apache Spark Wrap Up

We’d love to help you get started with .NET for Apache Spark and hear your feedback.

You can Request a Demo from our landing page and check out the .NET for Spark GitHub repo to learn more about how you can apply .NET for Spark in your apps and get involved with our effort to make .NET a great tech stack for building big data applications!

The post Using .NET for Apache® Spark™ to Analyze Log Data appeared first on .NET Blog.
