100 episodes


Feds at the Edge
FedInsider

    • Technology
    • 5.0 • 5 Ratings

The federal government is changing the way it handles data. It is transitioning from an on-premises data center approach to the cloud, and it is ingesting data from a wide range of sensors. Feds at the Edge is a podcast that addresses those concerns.

    Ep. 147 Challenges of Continuous Compliance with a Remote Workforce

    Compliance is difficult enough in an air-conditioned data center; taking this essential concept to an austere geography that has spotty communications with the potential of bullets flying makes it almost impossible.
    This disruption of communication has a name: Denied, Disrupted, Intermittent, and Limited, or DDIL. When communications are restored, these systems must still maintain compliance standards.
    Today we get some perspectives on how to manage this arduous task.
    From a design perspective, the developers who deploy an application may not be the same people who secure the endpoints. As a result, agencies must work out a process in which application updates and endpoint security are systematized together.
    Jay Bonci from the U.S. Air Force describes how compliance can be checked during regular maintenance, when central compliance information can be transferred to the field.
    Nigel Hughes from SteelCloud shares that many systems administrators still execute these updates manually with an assortment of tools. That approach may have been tolerable with a few endpoints; today there is such a profusion of them that automation is needed.
    In a perfect world, one can scan assets, determine their policy posture, and examine applications, browsers, and databases against a baseline. If there is drift, the endpoints can be snapped back into compliance; a minimal sketch of that drift check appears after this summary.
    For more details, listen to the discussion; it delves into federated vs. centralized compliance and the debate over how to define an endpoint in a world of platform-as-a-service.
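    As a rough illustration of that snap-back idea, here is a minimal, hypothetical Python sketch. The baseline values, scan_asset, and apply_setting functions are placeholders standing in for whatever scanner and configuration-management hooks an agency actually uses; none of them come from a tool named in the episode.

        # Minimal, hypothetical sketch of baseline drift detection and remediation.
        # BASELINE, scan_asset(), and apply_setting() are stand-ins for a real
        # compliance scanner and configuration-management system.

        BASELINE = {
            "password_min_length": "15",
            "tls_min_version": "1.2",
            "audit_logging": "enabled",
        }

        def scan_asset(asset_id: str) -> dict:
            """Pretend scan: return the asset's current security settings."""
            # In practice this would query the endpoint or a compliance agent.
            return {
                "password_min_length": "8",    # drifted
                "tls_min_version": "1.2",
                "audit_logging": "disabled",   # drifted
            }

        def apply_setting(asset_id: str, key: str, value: str) -> None:
            """Pretend remediation: push a baseline value back to the endpoint."""
            print(f"[{asset_id}] resetting {key} -> {value}")

        def snap_back(asset_id: str) -> list[str]:
            """Compare an asset against the baseline and reapply drifted settings."""
            current = scan_asset(asset_id)
            drifted = [k for k, v in BASELINE.items() if current.get(k) != v]
            for key in drifted:
                apply_setting(asset_id, key, BASELINE[key])
            return drifted

        if __name__ == "__main__":
            for asset in ["ws-001", "ws-002"]:
                drift = snap_back(asset)
                status = "drift corrected" if drift else "compliant"
                print(f"[{asset}] {status}: {drift}")

    In practice the baseline would come from an approved configuration standard, and the remediation step would typically be gated by change control rather than applied automatically.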
     

    • 1 hr
    Ep. 146 The cyber wild west is still wild

    When the United States expanded westward, there was a surprise around every corner; in a similar vein, we see unlimited storage, fast speeds, and artificial intelligence creating a technical “wild west” environment for the federal government.
    Instead of a posse of Texas Rangers, we have a group of federal experts who have demonstrated their ability to corral malicious code and prevent robbers from stealing you blind.
    Marisol Cruz Cain from the GAO highlights some of the unpublicized aspects of AI. She mentions that its ability to rewrite code can make attribution difficult. In other words, AI can allow malicious code to mutate frequently, preventing any signature identification.
    Although the federal government has many cyber compliance requirements, the idea of using an independent group to attack a system was discussed. In the parlance of the cyber community, this is called a “red” team. They attack systems to see what weaknesses they can find. This effort can help address unanticipated weaknesses.
    One anticipated weakness that is front of mind for many is legacy systems. Paul Blahusch of the Department of Labor recommends taking a prudent view of your systems to see which are legacy and which have unique vulnerabilities. He suggests funds can be appropriated based on those vulnerabilities.
    We are at a point where leaders may be confronted with cyber tools heaped upon cyber tools. JD Jack from Google suggests a practical approach called “security validation.”
    This gives leaders a report on what could happen in an attack: you take stock of the tools you already have and find a way to evaluate how well they would actually perform.

    • 59 min
    Ep. 145 Breaking the System into Tiny Little Pieces: a DoD approach to Zero Trust and micro segmentation.

    Tools | What to segment | floating data centers
    Four years ago, we needed panels just to define Zero Trust Architecture (ZTA). Today, the federal community recognizes the benefits of ZTA. That was the first hurdle; this panel moves on to the “hows” of implementation, with a focus on micro-segmentation.
    When Angela Phaneuf worked at the software factory called Kessel Run, the team made itself famous for innovation. She gives some practical tips on how to deploy ZTA.
    She explains that tools can assist in the move to micro-segmentation; however, there are many to choose from. One approach that has worked for her is to assemble a catalog of tools that can help in a variety of environments.
    Dr. Cyril “Mark” Taylor shares with the audience his view of the priorities to accomplish change. He mentions policy first, culture, and finally, the technology itself. His experience with the military indicates that once a team has a well-defined goal, the transition can be made.
    For most of its recent history, the US Coast Guard has had to rely on slow satellite service. Captain Patrick Thompson informs the audience that today’s Coast Guard is looking at satellite service that can run as high as one hundred megabits per second (Mbps).
    Increased speed gives crews the ability to use more compute and storage at the edge – he calls today’s ships floating data centers.
    Sometimes, more data can lead to trouble. A system architect should know where micro-segmentation is a benefit. Dave Zukowski from Akamai suggests looking at the risk profile of each system: just because two systems can be integrated does not mean they should be.
     

    • 1 hr 1 min
    Ep. 144 Unlocking Modernization with AI Management: Meeting the Mission Imperative

     
    Today we hear perspectives on how AI can assist federal agencies. Kevin Walsh from the GAO provides observations from several federal agencies; Pritha Mehra from the U.S. Post Office gives practical examples of deployment.
    The overriding consensus is that AI is not a panacea that will solve every federal technology problem. It has promise, but it must be approached cautiously so that its power serves your agency’s mission.
    Kevin Walsh from the GAO has seen his share of federal agency AI implementations. He has concluded that effectiveness varies from agency to agency. He cautions that one cannot paint with a broad stroke and update every legacy system using AI. Some systems are not suitable candidates for any kind of upgrade, and some simply need to be turned off.
    He suggests evaluating the risks associated with AI, which include oversight, false information, and understanding how AI makes decisions.
    Pritha Mehra from the Post Office gives outstanding examples of how the Post Office is already using AI to improve service. They are looking at processing, the network, and supply chain issues.
    They are at a maturity level where they are using AI to analyze data and optimize package delivery times.
    It sure looks like other agencies can draw on guidance from the GAO, as well as success stories from the Post Office, to glean ways to apply AI in their own organizations.

    • 56 min
    Ep. 143 Generative AI in Government

    Every headline one reads sings the praises of Generative Artificial Intelligence; today’s interview showcases some successes, as well as some aspects federal users should be aware of. The discussion includes concepts like hallucinations, test beds, and establishing trust.
    When ChatGPT was released, there was an explosion of people lauding its benefits: finally, one can vacuum up previous knowledge and present it in many formats. What has not been highlighted is that this approach can have serious glitches and produce narratives in which Napoleon was part of the American Civil War.
    Sukhvinder Singh uses a common term in the field to define this. He calls it a “hallucination.” To prevent these psychedelic, entertaining, but frustrating results, Sukhvinder suggests setting up a test bed. That will allow federal leaders to experiment with data, try out governance processes, and gain a better understanding of cost.
    By now, it is well known that if you vary the prompt to a generative AI system, you can vary the results of a query. Those variations do not help a public-facing service like the TSA’s, which needs generative AI to provide repeatable operational responses; a sketch of one way to chase that repeatability appears after this summary.
    Generative AI is maturing fast. Listen to this discussion to see which use cases can make this innovative technology sustainable and trustworthy.
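    As a rough sketch of how a team might reduce that run-to-run variation, the example below assumes the OpenAI Python client; the model name, system prompt, and seed are placeholders, and determinism via a seed is best-effort and varies by provider and model, so this is an illustration rather than a recommended configuration.

        # Minimal sketch: steering a generative model toward repeatable answers.
        # Assumes the OpenAI Python client (pip install openai) and an API key in
        # the OPENAI_API_KEY environment variable; model, prompt, and seed are
        # placeholders, not settings from the episode.
        from openai import OpenAI

        client = OpenAI()

        SYSTEM_PROMPT = (
            "You are a public-facing help assistant. Answer only from the approved "
            "policy text provided, and say you do not know otherwise."
        )

        def answer(question: str) -> str:
            response = client.chat.completions.create(
                model="gpt-4o-mini",   # placeholder model name
                temperature=0,         # greedy decoding: least run-to-run variation
                seed=1234,             # best-effort determinism where supported
                messages=[
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": question},
                ],
            )
            return response.choices[0].message.content

        if __name__ == "__main__":
            # Asking the same fixed question twice should yield near-identical answers.
            print(answer("What items are allowed in carry-on luggage?"))

    Even with these settings, agencies typically add retrieval from approved content and response review before exposing such a system to the public.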
     

    • 59 min
    Ep. 142 Facing the Challenges AI Poses

    ChatGPT certainly has a great public relations department. It is portrayed as the answer to every conceivable problem for the beleaguered federal technology professional.
    Today, we sit down with a group of experts in what may be termed “Applied Artificial Intelligence.” We look at several aspects, including preparing your data for AI, the best applications for AI, and putting up guardrails to use AI safely.
    Mangala Kuppa from the Department of Labor indicates that every use case is unique. The fact that one data set yields valid results does not mean another will. In her experience, every use case will require you to do a bit more to ensure satisfactory results.
    Kurt Steege from Thundercat Technology catches the attention of the listener when he lists malicious code that has been enabled by AI. He refers to Dark Bard, Poison GPT, and Deep Fakes as examples of how malicious actors are using AI to attack.
    It is no wonder that the best tool to counter AI attacks is . . . AI itself. For example, an AI-enabled attack moves with such speed that a human may not be able to stop it. Julian Zottl indicates that we may not have a choice about whether or not to use AI in defense.
    When the interview wraps up, the participants agree that AI should be framed within the broader context of the technology system itself. Reliable and trusted results from AI are not just technical solutions; they also require governance, alignment with agency goals, and accountability.

    • 1 hr

Customer Reviews

5.0 out of 5
5 Ratings

