100 episodes

The federal government is changing the way it handles data. It is transitioning from an on-premises data center approach to the cloud, and it is ingesting data from a wide range of sensors. Feds at the Edge is a podcast that addresses these concerns.

Feds at the Edge FedInsider

    • Technology

    Ep. 150 Hard Truths of Data Security in the Public Sector

    Every reader has heard the phrase, “Lulled into complacency.” One may complete a checklist, sit back, and feel secure. That security can be false.
    Today’s explosion of data and reliance on compliance has led to a situation where federal agencies can be attacked from vectors that were never anticipated.
    The Zero Labs report from Rubrik shows how much data has grown:
    ·       Data: 25% year-over-year growth for most organizations
    ·       Cloud: 61% growth in cloud data
    ·       SaaS: 200% increase
    This growth is echoed in statistics from data.gov, which reports 250,000,000 data sets in use by the public sector. The bad news: generative AI will create even more data.
    Best practices to steel yourself against attack include identifying where the data is stored, prioritizing what to protect, and collaborating with humans to determine who has access and when.
    Travis Rosiek from Rubrik explains how he was working with an agency in a backup capacity. When they tried to determine what to back up, they discovered sensitive data where it should not be.
    All agencies have a limited budget for data protection. Travis Rosiek recommends finding the most sensitive data and prioritizing protection there.
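    That prioritization advice can be sketched in a few lines. This is a hedged illustration only; the store names and the sensitivity/exposure scores are invented, and a real agency would substitute its own data classification scheme:

```python
# Hypothetical sketch: with a limited protection budget, rank data
# stores by risk and protect the most sensitive first. The names and
# scores below are illustrative, not from the episode.
stores = [
    {"name": "hr_records",   "sensitivity": 5, "exposure": 3},
    {"name": "public_forms", "sensitivity": 1, "exposure": 5},
    {"name": "case_files",   "sensitivity": 4, "exposure": 4},
]

def risk(store: dict) -> int:
    # Simple heuristic: how sensitive the data is, weighted by how exposed it is.
    return store["sensitivity"] * store["exposure"]

# Spend the protection budget from the top of this list downward.
priority = sorted(stores, key=risk, reverse=True)
```

    The scoring heuristic is the replaceable part; the budgeting logic stays the same: sort by risk, then protect from the top down.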
    Malicious actors know the vulnerable moments in a large organization: departures, weekends, and holidays. Managers should consider how security is covered when these events arise.
    One entertaining “human” problem Travis Rosiek reveals is hoarding data. Simply keeping data for eternity can expose a federal agency to malicious actors who have hidden attack code in the data.
    The lesson: move beyond compliance and think strategically about how your agency will get attacked.

    • 56 min
    Ep. 149 How Agencies can Adopt AI Swiftly and Securely

    By now, we have seen demonstrations of Artificial Intelligence summarizing content and even producing images. These are all great YouTube videos for a rainy Saturday afternoon, but what about the work of the government?
    With AI, one must begin with the data. When it comes to explaining how to leverage the petabytes of information, Karen Hall has a memorable quote.
    “Generative AI can unlock the knowledge trapped in data.”
    Her four guidelines for releasing this information are
    ·       Make sure the data is authoritative
    ·       Enable connectivity to other systems
    ·       Be aware of data standards
    ·       Use AI in a responsible manner.
    AI requires mountains of data to see patterns and help humans draw conclusions. Government agencies may hold sensitive information in their data stores, making it difficult to assemble meaningful collections of data.
    Dr. Travis Hall from NTIA suggests that AI itself can protect personal information. It can act as a privacy-enhancing technology by obfuscating data so that trends can still be seen, saving money and speeding up operations.
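    One common privacy-enhancing technique that fits this description is pseudonymization: replacing direct identifiers with stable tokens so aggregate trends survive while raw values stay hidden. A minimal sketch, with invented field names; salted hashing is shown purely for illustration, and a production system would use a managed keyed scheme:

```python
import hashlib

def pseudonymize(value: str, salt: str = "agency-secret") -> str:
    # Replace a direct identifier with a stable, irreversible token.
    # The same input always maps to the same token, so trends across
    # records remain analyzable without exposing the raw value.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

records = [
    {"ssn": "123-45-6789", "benefit": 200},
    {"ssn": "123-45-6789", "benefit": 150},
    {"ssn": "987-65-4321", "benefit": 300},
]
# Strip the identifier, keep an analyzable token plus the useful field.
masked = [{"id": pseudonymize(r["ssn"]), "benefit": r["benefit"]} for r in records]
```

    Because the first two masked records share a token, per-person aggregation still works even though no record contains the original identifier.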
    Our expert from California, Hong Sae, offers many ways AI can assist government functions: predicting traffic patterns, locating potholes, voice-analytics customer service, gunshot detection, and predicting crime patterns.
    It is the early days of applying AI in a fast and secure manner. This discussion gives listeners the basic building blocks for success.

    • 59 min
    Ep. 148 AI Can Set a New Standard for Customer Service

    Everyone wants to pick up the phone and quickly reach a human who has an immediate, correct response. On the other hand, government institutions are characteristically understaffed and underfunded. The challenge is to apply modern technology to improve customer service within the allotted budgetary constraints.
    Amanda Nabours suggests that an answer that is one hundred percent correct must begin with the data used to produce it. Data stores must be free of bias, and privacy must be protected.
    Right now, her agency is in an exploratory phase, but she notes that one key to a successful deployment is training employees, before rollout, on what to expect when AI is relied upon to answer citizen questions.
    Google’s Tony Orlando expands on the robust nature of adding AI to citizen experience. He details how AI can improve the speed of response, automate reporting tasks, provide a more personalized experience, and even reduce fraud.
    During the interview Tony Orlando expands on six models to improve citizen experience, everything from improved reporting to optimizing traffic.
    This may be a great practical application of AI for government.

    • 55 min
    Ep. 147 Challenges of Continuous Compliance with a Remote Workforce

    Compliance is difficult enough in an air-conditioned data center; taking this essential concept to an austere geography that has spotty communications with the potential of bullets flying makes it almost impossible.
    This disruption of communication has a term: Denied, Disconnected, Latent, or DDL. When communications are restored, systems must still meet compliance standards.
    Today we get some perspectives on how to manage this arduous task.
    From a design perspective, an agency may have a process in which the developers who deploy an application are not the ones who secure its endpoints. As a result, a process must be established so that apps are updated and endpoint security is systematized as well.
    Jay Bonci from the U.S. Air Force describes how compliance can be checked during a regular maintenance process where central compliance information can be transferred to the field.
    Nigel Hughes from SteelCloud shares that today, many systems administrators execute these updates through a set of tools. This manual process may have been tolerable with a few endpoints; today there is such a profusion that automation is needed.
    In a perfect world, one can scan assets, determine policy posture, and examine apps, browsers, and databases against a baseline. If there is drift, they can be snapped back into compliance.
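    That scan-compare-remediate loop can be sketched in a few lines. This is an illustrative sketch only; the setting names below are invented and do not represent an actual compliance baseline:

```python
# Minimal sketch of baseline drift detection: compare an endpoint's
# current settings to an approved baseline, then "snap" drifted keys
# back to their approved values. Settings are hypothetical placeholders.
baseline = {"firewall": "on", "tls_min": "1.2", "autologin": "off"}

def remediate(current: dict) -> dict:
    # Find settings that drifted from the approved baseline.
    drifted = {k: v for k, v in current.items()
               if k in baseline and v != baseline[k]}
    for key in drifted:
        current[key] = baseline[key]  # snap back into compliance
    return current

endpoint = {"firewall": "on", "tls_min": "1.0", "autologin": "on"}
endpoint = remediate(endpoint)
```

    In a real deployment the drifted set would also be reported to central compliance, which matches the episode's point about transferring compliance information to the field.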
    For more details, listen to the discussion because it delves into federated vs. centralized compliance and the theoretical debate over defining an endpoint in a world of platform-as-a-service.

    • 1h
    Ep. 146 The cyber wild west is still wild

    When the United States expanded westward, there was a surprise around every corner; in a similar vein, we see unlimited storage, fast speeds, and artificial intelligence creating a technical “wild west” environment for the federal government.
    Instead of a posse of Texas Rangers, we have a group of federal experts who have demonstrated their ability to corral malicious code and keep bandits from robbing you blind.
    Marisol Cruz Cain from the GAO highlights some of the unpublicized aspects of AI. She mentions that its ability to rewrite code can make attribution difficult. In other words, AI can allow malicious code to mutate frequently, preventing any signature identification.
    Although the federal government has many cyber compliance requirements, the idea of using an independent group to attack a system was discussed. In the parlance of the cyber community, this is called a “red” team. They attack systems to see what weaknesses they can find. This effort can help address unanticipated weaknesses.
    One anticipated weakness front of mind for many is legacy systems. Paul Blahusch of the Dept. of Labor recommends taking a prudent inventory of your systems to see which are legacy and which have unique vulnerabilities. He suggests funds can be appropriated based on those vulnerabilities.
    We are at a level where leaders may be confronted with cyber tools heaped upon cyber tools. JD Jack from Google suggests a practical approach called “security validation.”
    This gives leaders a report on what could happen in an attack. With this method, you look at the tools you have and find a way to evaluate them.

    • 59 min
    Ep. 145 Breaking the System into Tiny Little Pieces: a DoD Approach to Zero Trust and Micro-Segmentation

    Tools | What to segment | floating data centers
    Four years ago, we needed to have panels define Zero Trust Architecture (ZTA). Today, the federal community recognizes the benefits of ZTA. That was the first hurdle; today, we have a panel that gives the “hows” of implementation, with a focus on micro-segmentation.
    When Angela Phaneuf worked at the software factory known as Kessel Run, the team made itself famous with innovation. Angela gives some practical tips on how to deploy ZTA.
    She explains that tools can assist in the move to micro-segmentation; however, there are many. One approach that has worked for her is assembling a catalog of tools suited to a variety of environments.
    Dr. Cyril “Mark” Taylor shares with the audience his view of the priorities to accomplish change. He mentions policy first, culture, and finally, the technology itself. His experience with the military indicates that once a team has a well-defined goal, the transition can be made.
    For most of its recent history, the US Coast Guard has had to rely on slow satellite service. Captain Patrick Thompson informs the audience that today’s Coast Guard is looking at satellite service that can run as high as one hundred megabits per second (Mbps).
    Increased speed gives crews the ability to use more compute and storage at the edge – he calls today’s ships floating data centers.
    Sometimes, more data can lead to trouble. A system architect should know where micro-segmentation is a benefit. Dave Zukowski from Akamai suggests looking at the risk profile of a system: just because two systems can be integrated does not mean they should be.

    • 1h 1m
