Thursday, December 10, 2015

Student Loans and Higher Ed


It's the end of fall semester for universities across the nation.  I generally find this time of year sort of a bummer.  

The days are shorter and colder.  The winter holidays are around the corner which I'm sure elate many, but I find that the holidays bring up an array of social concerns I have about consumption.  My birthday is also around this time and that became way less exciting around 29.  

Perhaps most of all though, the end of fall semester is a time of reckoning.  The semester starts off with great hope.  Students and faculty are relaxed and tan from their summer fun.  First years are bright-eyed and eager to solve the world's most difficult issues.  We all have our highlighters, stickies, and organizers in hand: THIS will be the semester that we get that grant, publish that paper, get straight A's, accomplish all great tasks.  

Then December rolls around and we find, very clearly, what we have accomplished in the past 4 months or so and what we did not.  

No matter.  The wonder of higher education is that every semester holds this same hopeful promise.  

And so, those like myself who find peace in the emotional volatility of living semester by semester go back for more degrees.  

Then there are those of us who drive ourselves mad jumping through the hoops and dancing the jig that enables us to remain in the university setting teaching, researching, and providing service in the name of enlightenment.

Many incur substantial debt to do so.  

The Washington Center for Equitable Growth, a think tank, put together an interesting, interactive national map showing who has the most debt, the greatest incomes, and the most loan delinquency.  

The think tank claims several findings: 
  • "...borrowers with the lowest student loan balances are the most likely to default because they are also the ones likely to face the worst prospects in the labor market."
  • "As median income increases in a zip code, so does the average loan balance, until income reaches approximately $140,000. After that, the relationship becomes flat." 
  • ..."we calculate that student debt absorbs around 7 percent of gross income in zip codes where median income is $20,000, declining to 2 percent in the highest-income zip code"
The researchers attribute the inverse relationship between income and delinquency to two main causes: 
First, although graduate students take out the largest student loans, they are able to carry large debt burdens thanks to their higher salaries post-graduation. One study of student loans by institution type reports a three-year cohort default rate for graduate-only institutions of 2 percent to 3 percent.* Second, the rise in the number of students borrowing relatively small amounts for for-profit colleges has augmented the cumulative debt load, but because these borrowers face poor labor market outcomes and lower earnings upon graduation (if they do in fact graduate), their delinquency rates are much higher. This is further complicated by the fact that these for-profit college attendees generally come from lower-income families who may not be able to help with loan repayments.
They continue with an argument that those of lower incomes lack access to fair credit markets and that this also contributes to their high debt load and delinquency,
It might seem counterintuitive that lack of access to credit results in delinquency—seemingly a problem of “too much debt.” But in fact, lack of access to credit and delinquency are two sides of the same coin. Nearly everyone needs access to credit markets to meet basic economic needs, and if they can’t get loans through competitive, transparent financial networks, poor people are more likely to be subjected to exploitative credit arrangements in the form of very high rates and other onerous terms and penalties, including on student loans. That disadvantage interacts with and is magnified by their lack of labor market opportunities. The result is exactly what we see across time and space: high delinquency rates for those with the least access to credit markets.
Recently, several state attorneys general banded together to force a for-profit college company, Education Management Corporation, to forgive $102.8 million in educational loans.  

According to a NYTimes article, the corporation was accused of "boiler-room tactics to enroll students who had little chance of succeeding in college."

Thus comes a rather difficult moral situation.  Even if we leave the idea of "for profit" out of it, universities and colleges still have real-world business concerns.  Running a university is expensive.  There are infrastructure, people, stuff, activities, etc.  Thus, there is an interest in recruiting students to attend.  

As well, everyone should have the opportunity to attain higher education.  

But at what point does recruiting students become predatory?  Who gets to decide that a student will be unable to successfully complete a degree program?  Should we be able to go door to door to recruit for religion and political campaigns but not college?   

It is fairly common to judge the abilities of a prospective student by their transcript and academic record.  But when all the transcript signs point to little chance of success and a prospective student would like to try anyway, then what?  

The nation could offer free community college for students below a certain high school GPA and income threshold.  But that doesn't seem very promising... though hopeful and well intentioned.  

Difficult indeed; and a problem that we have been unable to resolve in the past four months. 

Better luck next semester.

Wednesday, December 2, 2015

Are scientists like everyone else?

Over the past several years (at least) the Pew Center has been polling AAAS scientists to better understand their politics.

Earlier this year, the Pew Center demonstrated the answer to this post's question is, "No... or rather, yes but they are not representative of everyone else."  

Scientists are, to be sure, members of the (potentially voting) public.  But on a host of key issues in American politics, scientists think differently than a random handful of US adults. A handy graph at the link shows quickly and easily how the groups differ.

Scientists themselves have been documenting the difference between scientists/experts and laymen for some time, especially as it pertains to diverging risk perceptions and the value of information.
The notable Daniel Kahneman recently wrote a popular book that covered his work on the matter (and a bunch of his other work).

Whether or not it is a problem that scientists are overwhelmingly Democrats is debatable.  Concern arises given the active role in policy formulation that scientists have taken in recent decades and the potential for research/university institutions to become too ideological.    

This concern may have some weight given a growing controversy around unhappy college students who feel threatened by opposing ideas, or more simply, non-supportive ideas: see here and here.


Yet, while the Pew Center reports that the public largely considers scientists politically neutral, scientists themselves seem overwhelmingly confident about taking an active role in politics.  


One can participate in politics without necessarily taking a political position.  For instance, if an elected representative asks a question, one can provide a variety of relevant information. 

But, I think this is easier said than done.  First, it takes a certain amount of self-awareness, personal restraint, and emotional maturity that, in my experience, most people, let alone academics, do not have.  

Second, it is practically difficult to provide a vast amount of information especially when a main purpose of coming to an expert in the first place is to simplify the process.  That is, anyone can access Google. 

Choice of information to provide becomes important.  What information is relevant largely depends on the person making those decisions and what they value.  Hence, self awareness, personal restraint, and emotional maturity become key factors in the fairness of offered information.

So, what to do...  

Is it better/easier to convince the public that their beloved scientists are not as objective and politically neutral as the fable leads us to believe?  Or is it better/easier to encourage scientists to own their place in the world as a value-laden Everyman?  

Either direction leads to pulling the curtain back, revealing Oz, and dispelling the magic, mystery, and, it would seem, the largely unnoticed political power of scientists.     

Monday, November 23, 2015

Guest Post: Leveraging Knowledge



I have a guest post up at Socializing Finance, a blog dedicated to topics and issues in the world of Social Studies of Finance.  An excerpt is below.  You can read the rest on their blog here.
In a financial report for the US Woods Hole Oceanographic Institution, I read about their S&P credit rating.  Among other things, the rating rests on relatively stable Federal funding.  Following the trail, I turned up credit ratings for other large research groups and entire academic institutions. 
Brief skims of the credit reports indicate Federal funding as a key variable for credit rating decisions.  For instance, an excerpt from Moody's review of UCAR (closely associated with the National Center for Atmospheric Research):
STRENGTH: A substantial portion of UCAR's funding is received through a cooperative agreement from the National Science Foundation. 
CHALLENGE: UCAR is heavily reliant on federal funding for its research (98% of operating revenues are grants and contracts), with limited revenue diversification, exposing the organization to the risk of contract termination.
Ultimately, I am curious about what it means for the research-to-societal-benefit connection when the creditworthiness of an institution is tied to Federal investment.  Does winning grants mean the institution produces knowledge that advances societal goals, or that the institution is just symbolically valuable?  

Or are the means supplanting the ends?  That is, financial stability was a means to the end of good research but has come to be an end in itself.  

Check out the rest of the post at Socializing Finance, and let me know any thoughts.

Wednesday, November 18, 2015

NC Senate Bill 374: One Fish, Two Fish...

Today's post comes from Shelby White, a graduate student in Coastal and Ocean Policy.  Shelby's research focuses on regulation controversy between North Carolina's recreational and commercial fishing industries.  Her background is in marine science and the life and business of commercial fishing in NC with her family.


Those in careers that revolve around the water are subjected to some of the most contentious issues in North Carolina.  The for-hire industry is currently facing the possibility of a future logbook that would require each individual holding a For-Hire Coastal Recreational Fishing license to report the catch and effort data for each trip taken.  On November 10, members of the for-hire stakeholder advisory group met in New Bern to discuss expectations for future logbooks, as well as the positive and negative implications it could have for the industry.  This meeting was pursuant to Section 2 of Senate Bill 374, which repealed the requirement of a for-hire logbook.

Mention of requiring a for-hire logbook began in 2011, when the North Carolina Division of Marine Fisheries held a series of stakeholder meetings to receive feedback on proposed for-hire licensing changes.  In 2013, the General Assembly granted the Division the authority to require a mandatory for-hire logbook.  Throughout 2014, feedback was requested from the industry in regards to the construction and implementation of a logbook, including desired reporting methods.  The Division received $275,000 in federal funding to design a template and create mobile and internet applications that would allow for convenient reporting.  In 2015, another series of meetings were held resulting in significant opposition to the for-hire logbook.  The North Carolina Marine Fisheries Commission delayed the vote on the proposed for-hire logbook and the General Assembly removed the statutory authority to implement a log book.  Senate Bill 374 was adopted to repeal the mandatory requirement, as well as require the Division to study the advisability of a for-hire logbook in the future.  

From the perspective of the Division, a logbook would allow for more accurate data to guide management decisions, the ability to show the economic value of the industry in North Carolina, increased capability to establish accurate quotas and annual catch limits, the opportunity to identify rare event species, and the ability to better calculate harvest and discard rates.  To those involved in the industry, however, the logbook represents an infringement on valuable time and raises much concern about the future of the industry.  The concept of the for-hire logbook is similar to the Trip Ticket Program implemented in the commercial fishing industry.  Trip tickets are completed by the fish dealer to report catch and effort data, whereas the for-hire logbook would be completed by the charter captains themselves.

One group of for-hire fishermen supports implementation of the logbook as a means to restore once-viable fisheries, and the other is concerned the logbook will only result in inaccurate data and lead to further depletion of fish stocks.  Mandating a for-hire logbook will require each charter captain to fill out reports, regardless of their willingness to support a logbook.  The concern is that captains opposed to the logbook will report without accuracy, resulting in “bad” data that will be used by the Division for management decisions.  This raises the debatable question of whether logbooks should be required for all charter captains or only those willing to report accurately.  Inaccurate reports have the ability to skew management decisions and affect catch limits, as well as estimated stock sizes.  The matter then becomes a question of who should be allowed to participate in the reporting and whether or not these individuals should receive incentives, such as extra boat slips, increased catch limits, or monetary compensation.  

The recreational for-hire industry is facing the depletion of valuable fish stocks and it is becoming increasingly difficult to make a living on the water.  Since 2011, the proposed logbook has received much opposition from the for-hire industry.  The most prominent concerns stem from uncertainty in how the logbook data will be used and how the Division can ensure the most accurate data.  Some argue that it is simply the cost of doing business, while others are reluctant to increase the already demanding work hours in the for-hire industry.  Although there is still apparent opposition to the implementation of a logbook, most in the for-hire industry have recognized that it is no longer a question of how or why it will be created, but when.  There is much need to understand the impact of the for-hire industry on the fish stocks of North Carolina and implementation of a logbook is an efficient method to acquire this data.  Even with contrasting views on the logbook, all charter captains can agree that with less fish to catch, their livelihoods are at stake.   

Thursday, November 12, 2015

Spurious Hurricane Trends in the Pacific

"Hurricane Patricia October 23, 2015, GOES-15 visible animation" by National Oceanic and Atmospheric Administration (GOES-15 satellite); animation provided via the University of Miami's Rosenstiel School of Marine and Atmospheric Science - http://andrew.rsmas.miami.edu/bmcnoldy/tropics/patricia15/23Oct_VIS.gif.  Licensed under Public Domain via Commons.
Modern times are characterized by an immense amount of scientific information.  Journals abound.  Many experts maintain a blog.  The popular press publishes niche magazines for those who just can't get enough from their day job.

This reflects a (sometimes heated) conversation about what to believe about how the world works.  As well, the scientific debate is a process of checks and balances.  Sometimes scientists make mistakes.  Sometimes the best way to analyze data is not clear, obvious, or agreed upon by everyone.  The literature is better understood as a long drawn out discussion.

Often, this is taken a step further from what scientists argue is true to the inherently political realm of why it matters.  Therefore individual scientific publications cannot or perhaps, ought not, be divorced from the broader argument in which they are situated.

Recently, Nature featured a debate about the predictability of tropical cyclones based on the ENSO index predictor (El Niño-Southern Oscillation).  ENSO has two phases: El Niño (warmer than average) and La Niña (cooler than average).  The different phases, especially El Niño, garner international attention because they play a significant role in climate variability and thus the occurrence of extreme events such as... tropical cyclones.

In the three part correspondence, scientists debate methodological integrity.  This matters for what findings imply about the predictability of ENSO for TC activity.  This further matters for how society regards activity predictions which are used everywhere from the nightly news to policy advocacy groups to managing financial regimes and establishing insurance contracts.

(Despite its importance in society and policy, Nature is a subscription service.)

First: Research Paper
Jin, F-F., Boucharel, J., and Lin, I-I.  2014. Eastern Pacific tropical cyclones intensified by El Niño delivery of subsurface ocean heat. Nature. 516: 82-85.  (hereafter as JBL14)

The core of the JBL14 publication is as follows (from the abstract):
Here we show that El Niño—the warm phase of an ENSO cycle—effectively discharges heat into the eastern North Pacific basin two to three seasons after its wintertime peak, leading to intensified TCs. This basin is characterized by abundant TC activity and is the second most active TC region in the world. As a result of the time involved in ocean transport, El Niño’s equatorial subsurface ‘heat reservoir’, built up in boreal winter, appears in the eastern North Pacific several months later during peak TC season (boreal summer and autumn). By means of this delayed ocean transport mechanism, ENSO provides an additional heat supply favourable for the formation of strong hurricanes. This thermal control on intense TC variability has significant implications for seasonal predictions and long-term projections of TC activity over the eastern North Pacific.
Currently, the world is experiencing El Niño conditions, among the most intense on record.   Much research suggests that El Niño slightly increases hurricane activity in the Pacific Ocean basin but slightly decreases it in the Atlantic basin.  It follows that the excitement of this summer's hurricane activity has been in the Pacific.  The season has produced the most intense western hemisphere TC on record, Hurricane Patricia (image above).

JBL14 find large correlations between ENSO and measures of tropical cyclone activity (i.e., ACE = accumulated cyclone energy) in the Eastern North Pacific (pictured here):
Observed ENSO signals in the winter are thus good indicators of TC activity during the subsequent summer in the central to eastern North Pacific, with the potential to capture about 40–70% of the yearly ACE variability. Because of the environmental and societal impacts of intense hurricanes, and even though the individual TC tracking still remains a considerable challenge, this high predictability of extreme hurricane activity may be valuable for surrounding regions. 
Thus, the implication of JBL14 is that the worst is yet to come.  While this summer may have been exciting, just wait till next summer...  
El Niño events usually peak around Christmas time; warm T105 [top 105m ocean temperatures] anomalies discharged from the Equator as the aftermath of El Niño events will therefore peak during the following boreal summer and autumn, just in time for the active hurricane season in the Northern Hemisphere.
And of course (as is standard fare), in closing, JBL14 give a plug for what this means for the future under climate change predictions.  

Second: Comment
Moon, II-Ju, Kim, S-H., and Wang, C. 2015. El Niño and intense tropical cyclones. Nature. 526: E4-E5. (hereafter, MKW15)

Overall, MKW15 agree with the general theoretical conclusion of JBL14: "specific big El Niño events" influence tropical cyclone activity.  But they challenge the significance and applicability of the findings: "The connections are not robust enough to apply for the seasonal prediction for all types of ENSO events."

The challenge is based on the questionable integrity of the analysis:
(1) the correlation between subsurface ocean heat delivered by El Niño and tropical cyclone activity is statistically exaggerated; and (2) wintertime ENSO conditions, which are claimed to have predictive value, are not strongly correlated with tropical cyclone activity during the subsequent summer. 
The first issue is one of data smoothing.  

Data smoothing is useful for identifying patterns in otherwise volatile data.  It is an attempt to identify the signal in the midst of the noise.  But interpreting smoothed data has caveats.

For one, it can create the impression of more dramatic trends than actually exist.  For instance, consider the two images below created from Google's Ngram tool using the phrase "political risk."  The one on the left has no smoothing.  The one in the middle has smoothing of 3.  The one on the right has smoothing of 40.


Clearly the one on the right makes things look forever increasingly politically risky.  This is because the smoothing process spreads extreme values across neighboring points in each window (i.e., across 3 or 40 years in the examples above), while suppressing year-to-year variability.    

Which one is best or most true has to do with logic and the standards and general practices of your colleagues.

In addition, and most relevant for this discussion, smoothed data can create correlations that are simply statistical artifacts.  This occurs because the data points are no longer independent.  Independence is a bedrock assumption for statistics and correlation calculations.  But I'll let other tech-savvy experts and bloggers explain this here and here and here.
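To make the artifact concrete, here is a minimal sketch (my own illustration, not from either paper; the window sizes and series lengths are arbitrary) of how a moving average applied to two completely independent noise series tends to manufacture correlation:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, window):
    # Centered moving average: the same basic idea as Ngram smoothing.
    return np.convolve(x, np.ones(window) / window, mode="valid")

def corr(x, y):
    # Pearson correlation coefficient between two equal-length series.
    return np.corrcoef(x, y)[0, 1]

# Two independent white-noise "annual" series: the true correlation is zero.
a = rng.normal(size=60)
b = rng.normal(size=60)

print(f"raw r          = {corr(a, b):+.2f}")
print(f"3-yr smoothed  = {corr(smooth(a, 3), smooth(b, 3)):+.2f}")
print(f"15-yr smoothed = {corr(smooth(a, 15), smooth(b, 15)):+.2f}")
```

Smoothing does not change the underlying (zero) relationship; it just shrinks the number of effectively independent data points, so large spurious correlations become far more likely.  Averaged over many random draws, the typical |r| grows with the smoothing window.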

How smoothing is used in analysis matters for conclusions.  Smoothing and spurious correlations underlie the controversy surrounding the infamous climate "hockey stick."  The core of the debate is whether or not the increased trend (that is, the part of the hockey stick that bats around the puck) is simply a statistical artifact.  The validity of the statistics in that image has implications for its meaning and value in debates about climate policy.  See peer-reviewed work on it here.

Hence, MKW15 call out JBL14 for botching their smoothing analysis.  
[JBL14] used a three-year smoothing, which is a suitable technique when the physical variations being examined are multiannual. However, the use of three-year smoothing is not appropriate in this case because [JBL14] examined interannual variations of tropical cyclone activity, focusing on interseasonal connections between wintertime ENSO and summertime tropical cyclones. It turns out that the smoothing significantly increased the correlation between subsurface ocean heat delivered by El Niño (based on the principal component of the second empirical orthogonal function mode in ref. 4, PC2) and tropical cyclone activity from 0.29 to 0.62. The smoothing also enhanced the correlation of a bilinear regression model of [JBL14] from 0.37 to 0.64. (I removed references to publication images for legibility here)
In summary, while a correlation still exists it is about half as large as JBL14 report.

MKW15 also call out JBL14 for cherry-picking data.  When comparing activity between El Niño and La Niña years, they used substantially unequal data sets: 43 months (El Niño) to 25 months (La Niña).  This matters because a core of the analysis is a comparison of the total number of storms in the two data sets.
This is an unfair comparison, leading naturally to a higher number of tropical cyclones in the high-heat-content periods. An impartial comparison should examine the differences in terms of mean values for each month rather than the total number of tropical cyclones.
Finally, MKW15 challenge the underlying assumption throughout JBL14's story.  
[JBL14] argued that observed ENSO signals (the Niño index) in the winter are good indicators of tropical cyclone activity during the subsequent summer in the eastern North Pacific. However, the correlation between the Niño index in the winter and ACE during the subsequent summer is very low (r = 0.18), which implies that the subsurface ocean heat delivered by El Niño has very little contribution (~3%) to the total variations of tropical cyclone activity in the subsequent summer.
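MKW15's "~3%" in the passage above is just the square of the correlation coefficient, i.e., the share of variance explained.  Running the same arithmetic on the smoothing numbers quoted earlier shows how much the apparent explanatory power was inflated (the r values below are the ones reported in the comment; the labels are mine):

```python
# Correlations reported in the MKW15 comment.
r_winter_nino_vs_ace = 0.18   # winter Niño index vs subsequent-summer ACE
r_unsmoothed = 0.29           # PC2 vs TC activity, raw annual data
r_smoothed = 0.62             # same relationship after three-year smoothing

for label, r in [("winter Niño vs ACE", r_winter_nino_vs_ace),
                 ("PC2 vs TC, unsmoothed", r_unsmoothed),
                 ("PC2 vs TC, smoothed", r_smoothed)]:
    # r**2 is the coefficient of determination: the fraction of variance explained.
    print(f"{label}: r = {r:.2f}, variance explained = {r**2:.1%}")
```

So while the correlation roughly halves without smoothing (0.62 to 0.29), the variance explained drops from about 38% to about 8%, and the winter-to-summer predictor explains only about 3%.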
Third: Reply
Jin, F-F., Boucharel, J., and Lin, I-I.  2015. Jin, et al. reply. Nature. 526: E5-E6. (hereafter JBL15)

JBL15 rationalize their smoothing process.  Remember from above that the acceptability of smoothing techniques largely has to do with who your friends are.  So, JBL15 feel justified whereas MKW15 find the technique irresponsible.  

The main point of disagreement is the logic of the time frame of analysis.  JBL14 look at activity rates from year to year.  That is, El Niño in year one will affect tropical cyclone activity in year two.  However, applying a three-year smoothing technique sort of "hides" these year-to-year changes in favor of broader brush stroke (multiyear) changes.  While MKW15 think that this sort of hiding is a problem, JBL15 say hiding the effects is exactly the point.  

I think MKW15 have a more convincing argument.  If one wants to know the impact of events in one year on the next, then seeking to hide those years where there is little to no effect sounds like cheating.

On the second critique, JBL15 argue that removing the months of no hurricane activity is important for accurate counting.  
[MKW15] argue that this significance is severely degraded when their ‘accurate’ counting of total number per month is used. However, we believe that their counting ignored one important fact: there are many months without any tropical cyclone (hurricane) occurrence in the record. We argue that those ‘hurricane-empty’ months should be removed for a truly accurate counting.      
I think that there is this thing that happens sometimes among people: those years in which a phenomenon of interest does not occur are treated as irrelevant.  

This is like my saying that my life is characterized by sadness and hardship based on a handful of unfortunate events rather than acknowledging the many years of joy and fulfilling opportunities.  

Focusing on a subset of events does not characterize the whole story.  It's not accurate in the sense that it is not the whole story.  

Finally, JBL15 take up the accusation that their findings are not all that useful.  Their argument is largely theoretical.  In effect: the height of El Niño is measured in winter over December.  The heat from strong El Niños has to go somewhere.  The heat is discharged into the eastern Pacific.  Warm water is significant for tropical cyclone activity.  So, December El Niño measurements are predictive of next year's activity. 

However, MKW15 argue that the findings are not useful because the observations do not support the theory.  

The thing with tropical cyclone data is that it is limited.  It is difficult to say that one theory is more correct than another because many, and often conflicting, theories are supported by the limited amount of data.  Data is limited in all basins but even more so in basins outside the Atlantic.   

So, predictability is perhaps itself spurious.  What is most important is the purpose for which these theories are being put forward and the wisdom of inserting them into specific decision-making contexts.  

Is it for academic debate?  Beefing up emergency management preparations?  Changing insurance rates?  These each have very different real-world outcomes.  

-----------

So, there you have it.  Debate, discussion and potentially, mistakes and inaccuracies in analysis.  

This is important for understanding what goes on in scientific journals and the significance of a body of work as compared to any individual article.  Picking a journal article out of the mix to represent significant advancements in knowledge is misleading at best.  

Tuesday, November 3, 2015

The size of "Marine" Science in the USA


Great stories are told through budget data:  (re)prioritizations of social values, the rise of new problems, and the fall of old problems.    

This post began with a question I had last week, "How big are the marine sciences?" 

A colleague at UNCW suggested that perhaps, while atmospheric studies are concentrated at large federal type institutions, marine science is scattered around.

In studying the earth, it is difficult to pull apart geophysical systems.  For instance, the climate is as much a product of the atmosphere as it is of the ocean.  The study of fisheries is similar.  Perhaps the ocean-atmosphere is better considered a gradient than two separate entities.

A human attribute is that we like to categorize: food/drink, man/woman, white/black, despite what the best of our scientific knowledge tells us about the difficulties of doing so.

Budgets seem to be exemplar of categorization.  In asking about the size of marine science it is useful to start with public financing of the activity. 

First things first: there is remarkable Federal support for research, and there has been for some time.  This ought no longer be astonishing news.  See, for example, Frontiers of Illusion. 


US budget data for this graph and the next are from the White House here

Clearly, most research money goes to defense.  But since the end of the Cold War, defense and non-defense R&D have seen a roughly 50/50 to 60/40 split.

NSF funding, the provider of much academic research, has maintained a small but constant and slightly increasing share of US non-defense funding.  The 1954-2016 average is 5%.

US social value priorities are evidenced by comparing NSF funding to that of NIH.

By comparison, NSF is very small.  NIH's 1960-2016 average share of non-defense R&D is 30%, and since reaching that 30% benchmark in the late 1980s, NIH funding has grown to 50% of non-defense R&D spending.


The NSF GEO division 
supports research spanning the Atmospheric, Earth, Ocean and Polar sciences. As the primary U.S. supporter of fundamental research in the polar regions, GEO provides interagency leadership for U.S. polar activities.
In 2014, GEO's budget was $1.3 billion.  The 2015 estimate and 2016 request are similar, give or take several million (these numbers may not be adjusted for inflation).  So, GEO funding is about 1/4 of NSF's budget (NSF in 2014 = $5.198B).
  
Particularly interesting,
GEO provides about 64 percent of the federal funding for basic research at academic institutions in the geosciences.
This means that geophysical research at universities is particularly sensitive to NSF budget fluctuations.

Of GEO's funding, about 30% goes to their Ocean Sciences Division (i.e., ~$356.3 million).  I have not been able to come up with historical data on this.  Perhaps I'll ask around.

The Ocean Sciences Division covers such wonders as Biological/Physical Oceanography, Ocean Drilling, and Marine Geology.

This of course does not include the outlays for NOAA.  Historical data would be handy here but I couldn't turn it up in a readily accessible manner.  

In 2015, NOAA operated on about a $5.5 billion budget.  This is about 40% of the Department of Commerce budget (see budget data link above).

Based on its 2016 budget request, we can note a couple of very marine science-y activities:
  • National Ocean Service: $574 million
  • National Marine Fisheries Service: $990 million
If you add in some other work that falls along that ocean-atmosphere gradient:
  • Oceanic and Atmospheric Research (OAR): $507 million
Then there are NOAA's big ticket programs:
  • National Environmental Satellite, Data and Information Services (NESDIS): $2,380 million
  • National Weather Service (NWS), which now does all sorts of marine-like forecasts, such as rip tides: $1,099 million
These programs are more complicated because they are neither simply marine nor simply atmospheric.  So, I'll just leave it at that.

So let's say the size of "marine" science, based on federal funding, is somewhere around...

$1.92 billion, if you consider only things that are explicitly marine or ocean; and 
$2.43 billion, if you include NOAA's OAR but exclude the big ticket NESDIS and NWS programs.

This is about 0.01% of US GDP.
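The tallies above can be checked with some back-of-the-envelope Python.  The program figures are the ones cited in this post (in $ millions); the GDP figure is my assumption of roughly $18 trillion for 2015, so the GDP share is only as good as that guess:

```python
# Figures cited above, in $ millions
nsf_ocean_sciences = 356.3   # NSF GEO's Ocean Sciences portion
noaa_ocean_service = 574.0   # National Ocean Service
noaa_fisheries = 990.0       # National Marine Fisheries Service
noaa_oar = 507.0             # Oceanic and Atmospheric Research (OAR)
nsf_total = 5198.0           # NSF total budget, FY2014

# Narrow estimate: things that are explicitly marine or ocean
narrow = nsf_ocean_sciences + noaa_ocean_service + noaa_fisheries

# Broader estimate: add OAR, still excluding NESDIS and NWS
broad = narrow + noaa_oar

us_gdp = 18_000_000          # assumed ~$18 trillion (2015), in $ millions

print(f"Narrow estimate: ${narrow / 1000:.2f} billion")   # $1.92 billion
print(f"Broad estimate:  ${broad / 1000:.2f} billion")    # $2.43 billion
print(f"NSF ocean share of NSF: {nsf_ocean_sciences / nsf_total:.1%}")  # ~6.9%
print(f"Broad estimate as share of GDP: {broad / us_gdp:.4%}")  # roughly 0.01%
```

With the assumed GDP, the broad estimate works out to a bit over 0.01% of GDP, so "about 0.01%" is in the right neighborhood.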

Of course this doesn't include any state funding.  For instance, in 2014-2015 North Carolina contributed about $10 million to Marine Fisheries Research and Management.  Nor does it include private funding or NIST.

This doesn't clarify my colleague's point (perhaps marine science funding is simply scattered about).  Nor do these estimates offer any comparison (is marine science a greater or lesser US priority?).

It does, however, answer the general question I started off with: How big is marine science?

It is about $2 billion, give or take.  The NSF portion (Ocean Sciences) is about 7% of NSF's budget, and the total is about 0.01% of US GDP.

That is of course, if you accept my characterization of marine science.

Google does not. It quickly routes me to Oceanography.  Wikipedia routes me to a vague explanation of things relating to the sea or ocean.

Where does marine science end and studying the rest of the geophysical system begin?  

Wednesday, October 28, 2015

Welcome!



This is a new project of the faculty and students of the Coastal and Ocean Policy program at UNCW.
We will post things that look at the nitty-gritty of science policy and politics.  The interesting stuff and the drama (the stuff of politics) is in the details and the nuance.  That stuff is hard to get at and even harder to articulate.  So, this is a space for practice.



Why Haint Blue?

Well, in part because one of your bloggers here, Jessica Weinkle, likes the phrase and, since moving to Wilmington, has become fascinated with the tradition.

But also, because it is apropos:

The tradition as I understand it [and I am finding my information on the internet so it must be true =)], is that the Gullah or Geechee people of the South would paint openings of their homes a watery blue to keep evil spirits at bay.  It was believed the malevolent haints (or haunts) could not cross water.  Hence, the blue paint tricks them into thinking that the home is surrounded by water thereby protecting the home.

Today, haint blue adorns homes of many types of southern people.

The color and the tradition speak to the many (many!) efforts, beliefs, and knowledges that people employ to help manage the great "hazards and vicissitudes of life."  Some acts are simple, such as painting the front door blue.  But in modern times, as expertise has become a cornerstone of contemporary political battles, our efforts to ward off troubles are remarkably complex.

Yet, no matter the color an issue is painted, underneath often lie fear and hope for improved decision making to reduce uncertainty and ensure that the future is, in some respect, favorable.  Of course, we may disagree about what to fear, what to hope for, and what a favorable future looks like.

The devil lies in the details, as the saying goes, and often the details become wards of technical and scientific expertise.  Wading through the details, identifying conflicts in social value objectives, and clarifying policy alternatives is an overriding goal of the Coastal and Ocean Policy program.  Haint Blue is a means to that practical and educational end.